Hawkins, Robert C
2007-11-01
Turnaround time (TAT) is one of the most noticeable signs of laboratory service and is often used as a key performance indicator of laboratory performance. This review summarises the literature regarding laboratory TAT, focusing on the different definitions, measures, expectations, published data, associations with clinical outcomes and approaches to improve TAT. It aims to provide a consolidated source of benchmarking data useful to the laboratory in setting TAT goals and to encourage introduction of TAT monitoring for continuous quality improvement. A 90% completion time (sample registration to result reporting) of <60 minutes for common laboratory tests is suggested as an initial goal for acceptable TAT. PMID:18392122
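The 90% completion-time goal suggested above (90% of common tests reported within 60 minutes of registration) is easy to monitor from timestamp pairs. The sketch below is a minimal illustration; the timestamps and the nearest-rank percentile choice are assumptions for the example, not data or methods from the review.

```python
from datetime import datetime, timedelta
import math

def tat_minutes(registered, reported):
    """Turnaround time in minutes, from sample registration to result reporting."""
    return (reported - registered).total_seconds() / 60.0

def percentile(values, pct):
    """Nearest-rank percentile (pct in 0-100) of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical registration/report pairs for one morning's routine tests.
t0 = datetime(2024, 1, 1, 8, 0)
tats = [tat_minutes(t0, t0 + timedelta(minutes=m))
        for m in (22, 35, 41, 48, 52, 55, 58, 63, 45, 30)]

# Hawkins' suggested initial goal: 90% completion time under 60 minutes.
goal_met = percentile(tats, 90) < 60
```

A laboratory would run the same calculation per test and per month, watching the 90th percentile rather than the mean, since a few extreme outliers dominate clinician perception of TAT.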
Emergency Department Overcrowding and Ambulance Turnaround Time
Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul
2015-01-01
Objective The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and evaluate the effect of crowdedness on ambulance turnaround time. Methods This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) for 4 hours divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed with random-effects for EDs, to evaluate associations between occupancy rate and turnaround time. Results Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to have older patients, those with normal mentality, and non-trauma patients. Overcrowded EDs were more likely to have longer turnaround intervals and traveling distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03 minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval with very small practical significance. PMID:26115183
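The occupancy-rate definition used in the study (average number of patients staying in the ED over a 4-hour window, divided by bed count) can be sketched directly; the census counts and bed number below are invented for illustration.

```python
def occupancy_rate(census_counts, beds):
    """ED occupancy rate: mean census over a 4-hour window divided by bed count.
    A value above 1.0 means more patients than beds, i.e. overcrowding."""
    mean_census = sum(census_counts) / len(census_counts)
    return mean_census / beds

# Hypothetical hourly census over one 4-hour window in a 20-bed ED.
census = [18, 22, 25, 19]
rate = occupancy_rate(census, beds=20)  # mean census 21 over 20 beds
```

With this metric, the study's median of 0.42 corresponds to an ED running well under capacity, while values above 1.0 (up to 1.94 here) mark the overcrowded subgroup analyzed separately.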
Book Availability Revisited: Turnaround Time for Recalls versus Interlibrary Loans.
ERIC Educational Resources Information Center
Gregory, David J.; Pedersen, Wayne A.
2003-01-01
Describes a study that tests the assumption that locally owned books that an academic library patron finds unavailable due to checkout can be obtained more quickly via recall than via interlibrary loan. Establishes average turnaround time for circulation recalls for comparison with turnaround times for interlibrary loans. (Contains 50 references.)…
Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential means of instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Nevertheless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel, and equipment. One tool for visualizing this relationship is a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
Improving medical imaging report turnaround times.
Marquez, Luis O
2005-01-01
Southern Ohio Medical Center (SOMC), a 232-bed community-based teaching hospital, is equipped with state-of-the-art equipment such as 2 16-slice computed tomography (CT) scanners, 3 MR scanners, 3 ultrasound scanners, 2 digital mammography units, and 3 nuclear medicine cameras. One hundred twenty-six employees--ranging from support personnel to technologists, along with 7 board-certified radiologists--staff the medical imaging department. Procedure volume is approximately 164,000 per year and is performed in all American College of Radiology (ACR)-accredited modalities. Filmless since 1998, SOMC's medical imaging department has realized productivity gains supporting the estimated 164,000 procedures for fiscal year 2005. The catalyst for the department is a robust picture archiving and communication system (PACS). Working with the radiologists, staff, and transcription services, turnaround time was reduced from 13 hours to 9 hours from exam start to report sign-off. Additional technology intervention was essential to further decrease report turnaround time. SOMC served as a beta site for a radiology information system (RIS). The new RIS has allowed the medical imaging department to move from a paper department to a "pseudo-paperless" department. Orders, history sheets, consents, and other forms are scanned into the RIS for staff and radiologist use. Requisitions are no longer printed, and staff have access to review workstations to ensure that patients are called back into the department for procedures. This new workflow has also reduced paper traffic within the department. The last piece of the technology puzzle to improve report turnaround time was voice recognition technology. From its implementation, voice recognition enhanced the RIS technology. All of the radiologists began to use the product as soon as it was available. They perform all the editing and corrections either by voice command or by manual typing.
The medical imaging department has noted that voice-command corrections and editing are more efficient for the radiologists. The overall impact of decreased radiology report turnaround times is seen not only in medical imaging but also globally within the hospital. SOMC expects to realize a reduction in the length of patient stays and a faster process for plotting the course of patient treatment, e.g., faster visits from emergency department (ED) physicians to patients. PMID:15794377
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
Turnaround Time (TAT): Difference in Concept for Laboratory and Clinician.
Pati, Hara P; Singh, Gurmeet
2014-06-01
Lab investigations are essential in patient management, and the quality of test reports is rightly emphasized. But another aspect of quality is often overlooked: timeliness, which is expressed as turnaround time (TAT). Laboratory services are mostly directed at providing a rapid, reliable report at a reasonable cost. However, most laboratories put undue stress on reliability alone, whereas the clinician gives more weight to how soon (TAT) a report will be available. There is no clear definition of TAT as to which period should be included in determining TAT for a specific test. For laboratory personnel, it would run from the time of receipt of the sample in the laboratory until the report is generated. For a clinician, however, the appropriate interval runs from the time of his/her requisition of a test until the report reaches him/her. TAT differs between routine tests and STAT/urgent tests, and again for ICU/emergency services. The causes of poor satisfaction among lab users include stat and routine test TAT, and stat test TAT is considered by the majority as the most important indicator of laboratory functioning. Hospital computerization, with a record of the time of test request, sample collection, report generation, and receipt of the report by the clinician, would help in generating TAT. Analyzing outliers in TAT gives a lab insight into the causes of delay and the areas needing improvement. Laboratories in developing countries are yet to use TAT and analyze it for laboratory improvement. PMID:24839360
Alshieban, Saeed; Al-Surimi, Khaled
2015-01-01
Turnaround time is an important quality indicator in surgical pathology. Retrospective analysis of three data points in September 2014, January 2015, and February 2015 showed that, on average, about a quarter (24%) of routine surgical pathology cases (26%, 19%, and 27% respectively) were not reported on time and did not meet the College of American Pathologists' (CAP) standard turnaround time, which states that at least 90% of routine surgical pathology cases should be reported and verified within two days. Our daily observation, supported by a root cause analysis exercise, revealed that most delays occur in subprocesses including slide allocation and delivery to pathologists, slide review by pathologists, report editing by transcriptionists, and report verification by pathologists. Thus, to tackle these process issues we developed a quality improvement project using the Model for Improvement to streamline the sample flow process and avoid unjustified reporting delays. The improvement ideas included developing a time log sheet to be attached to all pathology requests, and sending a daily reminder email followed by a phone call to all pathologists asking them to verify completed reports on the same day. These intervention elements were tested using multiple PDSA cycles, resulting in a very noticeable improvement, with more than 94% of all routine cases reported in less than two days, meeting and exceeding the CAP standard. Such improvement in the turnaround of surgical pathology reports will eventually improve the quality and safety of patient care outcomes, including diagnosing patients on time, developing appropriate treatment plans, and avoiding complications, morbidity, and mortality due to delayed reports. PMID:26734438
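The CAP benchmark this project worked against (at least 90% of routine cases verified within two days) reduces to a simple compliance check over a case log. The sketch below is illustrative only; the sign-out times are made up.

```python
def cap_compliance(turnaround_days, limit_days=2.0, target=0.90):
    """Fraction of cases reported within limit_days, and whether that
    fraction meets the target (CAP: 90% of routine cases within 2 days)."""
    within = sum(1 for d in turnaround_days if d <= limit_days)
    fraction = within / len(turnaround_days)
    return fraction, fraction >= target

# Hypothetical sign-out times (in days) for 10 routine surgical pathology cases.
days = [1.0, 1.5, 2.0, 1.2, 0.8, 1.9, 2.5, 1.1, 1.7, 1.4]
fraction, meets_cap = cap_compliance(days)  # 9 of 10 within 2 days
```

Tracking this fraction per PDSA cycle is exactly how the improvement above (from roughly 76% to over 94% on-time) would be verified.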
TOXLINE Toxicology Bibliographic Information
Storrow AB; Zhou C; Gaddis G; Han JH; Miller K; Klubert D; Laidig A; Aronsky D
2008-11-01
BACKGROUND: The effect of decreasing lab turnaround times on emergency department (ED) efficiency can be estimated through system-level simulation models and help identify important outcome measures to study prospectively. Furthermore, such models may suggest the advantage of bedside or point-of-care testing and how they might affect efficiency measures. OBJECTIVES: The authors used a sophisticated simulation model in place at an adult urban ED with an annual census of 55,000 patient visits. The effect of decreasing turnaround times on emergency medical services (EMS) diversion, ED patient throughput, and total ED length of stay (LOS) was determined. METHODS: Data were generated using a system dynamics analytic modeling and simulation approach on 90 separate days from December 2, 2007, through February 29, 2008. The model was a continuous simulation of ED flow, driven by real-time actual patient data, and had intrinsic error checking to assure reasonable goodness-of-fit. A return of complete laboratory results incrementally at 120, 100, 80, 60, 40, 20, and 10 minutes was compared. Diversion calculation assumed EMS closure when more than 10 patients were in the waiting room and 100% ED bed occupancy had been reached for longer than 30 minutes, as per local practice. LOS was generated from data insertion into the patient flow stream and calculation of time to specific predefined gates. The average accuracy of four separate measurement channels (waiting room volume, ED census, inpatient admit stream, and ED discharge stream), all across 24 hours, was measured by comparing the area under the simulated curve against the area under the measured curve. Each channel's accuracy was summed and averaged for an overall accuracy rating. RESULTS: As lab turnaround time decreased from 120 to 10 minutes, the total number of diversion days (maximum 57 at 120 minutes, minimum 29 at 10 minutes), average diversion hours per day (10.8 hours vs.
6.0 hours), percentage of days with diversion (63% vs. 32%), and average ED LOS (2.77 hours vs. 2.17 hours) incrementally decreased, while average daily throughput (104 patients vs. 120 patients) increased. All runs were at least 85% accurate.CONCLUSIONS: This simulation model suggests compelling improvement in ED efficiency with decreasing lab turnaround time. Outcomes such as time on EMS diversion, ED LOS, and ED throughput represent important but understudied areas that should be evaluated prospectively. EDs should consider processes that will improve turnaround time, such as point-of-care testing, to obtain these goals.
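The study's central mechanism, a shorter lab turnaround shortening ED length of stay, can be illustrated with a toy patient-flow model that is far simpler than the authors' system dynamics simulation. Every number and the parallel-workup structure below are assumptions for the sketch, not parameters from the paper.

```python
def ed_length_of_stay(triage_min, physician_min, lab_tat_min, disposition_min):
    """Toy ED visit model: the physician workup and the lab run in parallel,
    so disposition waits on whichever finishes later."""
    return triage_min + max(physician_min, lab_tat_min) + disposition_min

# The same hypothetical patient under a 120-minute vs a 10-minute lab turnaround
# (the two extremes compared in the simulation study).
slow = ed_length_of_stay(triage_min=15, physician_min=40, lab_tat_min=120, disposition_min=30)
fast = ed_length_of_stay(triage_min=15, physician_min=40, lab_tat_min=10, disposition_min=30)
saved = slow - fast
```

The toy model also shows why gains saturate: once the lab finishes before the physician workup does (here, any TAT under 40 minutes), further TAT reductions no longer shorten the visit.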
A strategy for reducing turnaround time in design optimization using a distributed computer system
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Padula, Sharon L.; Rogers, James L.
1988-01-01
There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.
Predictable turn-around time for post tape-out flow
NASA Astrophysics Data System (ADS)
Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya
2012-03-01
A typical post-tape-out data path at the IC fabrication site has the following major components of software-based processing: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow - predictable completion time and fastest turnaround time (TAT) - and at times they may compete. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that model to derive the resource allocation for subsequent runs [3]. This approach is more feasible for predominantly simulation-dominated tools, but for an edge-operation-dominated flow it may not be possible, especially if processing acceleration methods like pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without doing any upfront resource modeling and planning. The methodology then systematically either meets the turnaround-time target or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.
24 CFR 901.10 - Indicator #1, vacancy rate and unit turnaround time.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Indicator #1, vacancy rate and unit... rate and unit turnaround time. This indicator examines the vacancy rate, a PHA's progress in reducing.... This indicator has a weight of x2. (a) For the calculation of the actual and adjusted vacancy rate...
Insertable system for fast turnaround time microwave experiments in a dilution refrigerator
NASA Astrophysics Data System (ADS)
Ong, Florian R.; Orgiazzi, Jean-Luc; de Waard, Arlette; Frossati, Giorgio; Lupascu, Adrian
2012-09-01
Microwave experiments in dilution refrigerators are a central tool in the field of superconducting quantum circuits and other research areas. This type of experiment has so far relied on attaching a device to the mixing chamber of a dilution refrigerator. The minimum turnaround time in this case is a few days, as required by cooling down and warming up the entire refrigerator. We developed a new approach, in which a suitable sample holder is attached to a cold-insertable probe and brought into contact with transmission lines permanently mounted inside the cryostat. The total turnaround time is 8 h if the target temperature is 80 mK. The lowest attainable temperature is 30 mK. Our system can accommodate up to six transmission lines, with a measurement bandwidth tested from zero frequency to 12 GHz. This bandwidth is limited by low-pass components in the setup; we expect the intrinsic bandwidth to be at least 18 GHz. We present our setup, discuss the experimental procedure, and give examples of experiments enabled by this system. This new measurement method will have a major impact on systematic ultra-low temperature studies using microwave signals, including those requiring quantum coherence.
Kilgore, M L; Steindel, S J; Smith, J A
1998-08-01
We compared centralized vs distributed methods for delivering "stat" test results for blood gas, glucose, and electrolyte assays. The parameters for comparison were as follows: (a) laboratory turnaround time (TAT), (b) therapeutic TAT, and (c) staff satisfaction. Therapeutic TAT, defined as the time from the initiating order to the receipt of the result and the implementation of any indicated change in treatment, was obtained by direct observation of testing procedures at the bedside and timing each step in the process. Observing therapeutic TAT yields information on the impact of laboratory testing methods in the context of clinical decision making. Therapeutic TAT was 1-2 min shorter for bedside testing compared with a satellite laboratory and 9-14 min shorter in the satellite laboratory compared with centralized testing. Satellite laboratories received the highest staff satisfaction scores, followed by bedside testing, with the central laboratory receiving the lowest scores. PMID:9702944
Averaging facial expression over time
Haberman, Jason; Harp, Tom; Whitney, David
2010-01-01
The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064
Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh
2015-09-01
The objective of this study was to quantify the impact of a new technology to communicate the results of an infant HIV diagnostic test on test turnaround time and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time decreased from 68.13 to 41.05 days after implementation of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (e.g., GPRS printers) that reduce delays. PMID:26068719
Moore, K; Ackerman, K; Wareham, L; Stephens, D
2000-01-01
Developing a strategy and turnaround plan for a struggling medical group practice requires a combination of art and science. Orchestrating an effective about-face calls for decisive action on several fronts at once. Too many groups are slow to recognize the need to rescue a perilous situation. And rather than taking immediate action on multiple fronts, they try taking incremental steps over time. Implementing a successful turnaround requires both the diligence of follow-through and the vision to make necessary adjustments throughout the process. PMID:11067111
Flexible time domain averaging technique
NASA Astrophysics Data System (ADS)
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
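Conventional time domain averaging, the comb-filter baseline that the FTDA technique improves on, can be sketched in a few lines: slice the signal into whole periods and average them point by point, so noise and non-synchronous components cancel. The synthetic signal and noise level below are illustrative assumptions.

```python
import math
import random

def time_domain_average(signal, period):
    """Classic TDA: average the signal over whole periods, point by point.
    Components synchronous with `period` survive; noise shrinks ~ 1/sqrt(N)."""
    n_periods = len(signal) // period
    return [
        sum(signal[k * period + i] for k in range(n_periods)) / n_periods
        for i in range(period)
    ]

# Synthetic periodic signal (period = 50 samples, 40 periods) buried in noise.
random.seed(0)
period = 50
clean = [math.sin(2 * math.pi * i / period) for i in range(period)] * 40
noisy = [c + random.gauss(0, 0.5) for c in clean]

averaged = time_domain_average(noisy, period)
err_before = max(abs(noisy[i] - clean[i]) for i in range(period))
err_after = max(abs(averaged[i] - clean[i]) for i in range(period))
```

Note this sketch assumes the period is an exact integer number of samples; when it is not, the leftover fraction of a period is exactly the period cutting error (PCE) that FTDA is designed to avoid.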
Efficiency of an Automated Reception and Turnaround Time Management System for the Phlebotomy Room
Yun, Soon Gyu; Park, Eun Su; Bang, Hae In; Kang, Jung Gu
2016-01-01
Background Recent advances in laboratory information systems have largely been focused on automation. However, the phlebotomy services have not been completely automated. To address this issue, we introduced an automated reception and turnaround time (TAT) management system, for the first time in Korea, whereby the patient's information is transmitted directly to the actual phlebotomy site and the TAT for each phlebotomy step can be monitored at a glance. Methods The GNT5 system (Energium Co., Ltd., Korea) was installed in June 2013. The automated reception and TAT management system has been in operation since February 2014. Integration of the automated reception machine with the GNT5 allowed for direct transmission of laboratory order information to the GNT5 without involving any manual reception step. We used the mean TAT from reception to actual phlebotomy as the parameter for evaluating the efficiency of our system. Results Mean TAT decreased from 5:45 min to 2:42 min after operationalization of the system. The mean number of patients in queue decreased from 2.9 to 1.0. Further, the number of cases taking more than five minutes from reception to phlebotomy, defined as the defect rate, decreased from 20.1% to 9.7%. Conclusions The use of automated reception and TAT management system was associated with a decrease of overall TAT and an improved workflow at the phlebotomy room. PMID:26522759
Finding the Missing Time in the Instantaneous Turnaround Version of the Twin Paradox
NASA Astrophysics Data System (ADS)
Kohler, Matthew
2006-11-01
The change in the reading on a distant Earth clock from the point of view of a traveling twin who instantaneously reverses direction is calculated by considering the behavior of the Earth clock during the turnaround, performing an integral, and taking a limit.
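For context, the well-known answer that this limiting procedure recovers (stated here as standard textbook special relativity, not quoted from the article) is the simultaneity shift at the turnaround:

```latex
% From the Lorentz transformation t' = \gamma\,(t - vx/c^{2}), evaluated in the
% outbound and inbound frames just before and just after the reversal at
% Earth-frame distance L, the distant Earth clock jumps forward by
\Delta t_{\text{Earth}} = \frac{2\,v\,L}{c^{2}} .
```

Integrating the Earth-clock rate over a turnaround of finite duration and then letting that duration go to zero, as the article describes, reproduces this same jump and accounts for the "missing" Earth time.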
Preparing printed circuit boards for rapid turn-around time on a plotter
Hawtree, J.
1998-01-01
This document describes the use of the LPKF ProtoMat mill/drill circuit board plotter with the associated CAD/CAM software BoardMaster and CircuitCAM. At present its primary use here in Fermilab's Particle Physics Department is the rapid turnaround of prototype double-sided and single-sided copper-clad printed circuit boards (PCBs). (The plotter is also capable of producing gravure films and engraving aluminum or plastic, although we have not used it for this.) It has the capability of making traces 0.004 inch wide with 0.004 inch spacings, which is appropriate for high-density surface-mount circuits as well as other through-mounted discrete and integrated components. One of the primary benefits of the plotter is the capability to produce double-sided drilled boards from CAD files in a few hours. However, to achieve this rapid turnaround time, some care must be taken in preparing the files. This document describes how to optimize the process of PCB fabrication. With proper preparation, researchers can often have a completed circuit board in a day's time instead of the week or two wait with usual procedures. It is assumed that the software and hardware are properly installed and that the machinist is acquainted with the Win95 operating system and the basics of the associated software. This paper does not describe use with pen plotters, lasers, or rubouts. The process of creating a PCB begins with the CAD (computer-aided design) software, usually PCAD or VeriBest. These files are then moved to CAM (computer-aided machining), where they are edited and converted into the proper format for running on the ProtoMat plotter. The plotter then performs the actual machining of the board. This document concentrates on the LPKF programs CircuitCAM BASIS and BoardMaster for the CAM software. These programs run on a Windows 95 platform driving an LPKF ProtoMat 93s plotter.
NASA Astrophysics Data System (ADS)
Benard, William L.; Gabriel, Kaigham J.; Huff, Michael A.
2000-08-01
The MEMS Exchange is a DARPA-funded research program to provide the domestic MEMS community with an interface between MEMS designers and microfabrication facilities that allows designers to develop and exercise custom process sequences. These process sequences may cross the boundaries separating fabrication facilities in order to maximize process and design freedom. Information transfer within the MEMS Exchange fabrication network takes place almost exclusively via the Internet, so process information and metrology data are available to customers in near real time. In order for designers to utilize such a facility, basic operational information about the execution of a process sequence, such as price, turnaround time, and quality, must be available. Time stamps, process metrology, and cost data are all captured as part of routine MEMS Exchange operation. Special operations instrumentation has also been implemented in a software tool called the Process Event Logger to measure process requirements in terms of machine and staff time, and to determine the costs of delayed information, materials, and equipment availability. Initial data are presented, which are intended to be used to develop a model of MEMS prototyping fabrication that determines expected cost, turnaround time, and quality for custom process sequences.
Random time averaged diffusivities for Lévy walks
NASA Astrophysics Data System (ADS)
Froemberg, D.; Barkai, E.
2013-07-01
We investigate a Lévy walk alternating between velocities ±v0 with opposite sign. The sojourn time probability distribution at large times is a power law lacking its mean or second moment. The first case corresponds to a ballistic regime where the ensemble averaged mean squared displacement (MSD) at large times is ⟨x²⟩ ∼ t², the latter to enhanced diffusion with ⟨x²⟩ ∼ t^ν, 1 < ν < 2. The correlation function and the time averaged MSD are calculated. In the ballistic case, the deviations of the time averaged MSD from a purely ballistic behavior are shown to be distributed according to a Mittag-Leffler density function. In the enhanced diffusion regime, the fluctuations of the time averaged MSD vanish at large times, yet very slowly. In both cases we quantify the discrepancy between the time averaged and ensemble averaged MSDs.
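As a sketch, the two averages contrasted in this abstract can be written explicitly (standard Lévy-walk notation, assumed here rather than taken from the paper):

```latex
% Ensemble averaged MSD at large times:
%   ballistic regime:           \langle x^2(t) \rangle \sim t^2
%   enhanced diffusion regime:  \langle x^2(t) \rangle \sim t^{\nu}, \quad 1 < \nu < 2
% Time averaged MSD over a single trajectory of length t with lag \Delta:
\overline{\delta^2}(\Delta; t)
  = \frac{1}{t-\Delta} \int_0^{t-\Delta} \left[ x(t'+\Delta) - x(t') \right]^2 dt'
```

The "discrepancy" quantified in the paper is precisely the mismatch between these two quantities at large t.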
Kallen, Michael E; Sim, Myung S; Radosavcev, Bryan L; Humphries, Romney M; Ward, Dawn C; Apple, Sophia K
2015-10-01
Breast pathology relies on gross dissection for accurate diagnostic work, but challenges can necessitate submission of high tissue volumes resulting in excess labor, laboratory costs, and delays. To address these issues, a quality initiative was created through implementation of the Faxitron PathVision specimen radiography system as part of the breast gross dissection protocol; this report documents its impact on workflow and clinical care. Retrospective data from 459 patients who underwent simple or modified radical mastectomy at our institution between May 2012 and December 2014 were collected. Comparison was made between the mastectomy specimen control group before radiography use (233 patients, 340 breasts) and the Faxitron group that underwent postoperative radiography (226 patients, 338 breasts). We observed a statistically significant decrease in mean number of blocks between the control and Faxitron groups (47.0 vs 39.7 blocks; P<.0001), for calculated cost savings of US $146 per mastectomy. A statistically significant decrease in pathology report turnaround time was also observed (4.2 vs 3.8 days; P=.038). Postoperative mastectomy specimen radiography has increased workflow efficiency and decreased histology costs and pathology report turnaround time. These findings may underestimate actual benefits and highlight the importance of quality improvement projects in anatomical pathology. PMID:26277043
Statistics of time averaged atmospheric scintillation
Stroud, P.
1994-02-01
A formulation has been constructed to recover the statistics of the moving average of the scintillation Strehl from a discrete set of measurements. A program of airborne atmospheric propagation measurements was analyzed to find the correlation function of the relative intensity over displaced propagation paths. The variance in continuous moving averages of the relative intensity was then found in terms of the correlation functions. An empirical formulation of the variance of the continuous moving average of the scintillation Strehl has been constructed. The resulting characterization of the variance of the finite time averaged Strehl ratios is being used to assess the performance of an airborne laser system.
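The step described above, from intensity correlation functions to the variance of a continuous moving average, follows a standard identity for stationary processes; sketched here with assumed notation (S for the relative intensity, C for its autocovariance):

```latex
% T-second moving average of the relative intensity S(t):
\overline{S}_T(t) = \frac{1}{T} \int_{t-T}^{t} S(t')\, dt'
% Its variance in terms of the autocovariance C(\tau) of S:
\operatorname{Var}\!\left[ \overline{S}_T \right]
  = \frac{1}{T^2} \int_0^T \!\! \int_0^T C(t_1 - t_2)\, dt_1\, dt_2
  = \frac{2}{T^2} \int_0^T (T - \tau)\, C(\tau)\, d\tau
```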
Turnaround in Cyclic Cosmology
Baum, Lauris; Frampton, Paul H.
2007-02-16
It is speculated how dark energy in a brane world can help reconcile an infinitely cyclic cosmology with the second law of thermodynamics. A cyclic model is described in which dark energy with w<-1 equation of state leads to a turnaround, extremely shortly before the would-be big rip, at which both the volume and entropy of our Universe decrease by a gigantic factor, while very many independent similarly small contracting universes are spawned. The entropy of our model decreases almost to zero at turnaround, then increases for the remainder of the cycle: by a vanishingly small amount during contraction, empty of matter, and then by a large factor during inflationary expansion.
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
Analysis of averaged multichannel delay times
NASA Astrophysics Data System (ADS)
Kelkar, N. G.; Nowakowski, M.
2008-07-01
The physical significances and the pros and cons involved in the usage of different time-delay formalisms are discussed. The delay-time matrix introduced by Eisenbud, where only s waves participate in a reaction, is in general related to the definition of an angular time delay which is shown not to be equivalent to the so-called phase time delay of Eisenbud and Wigner even for single channel scattering. Whereas the expression due to Smith which is derived from a time-delayed radial wave packet is consistent with a lifetime matrix which is Hermitian, this is not true for any Eisenbud-type lifetime matrix which violates time-reversal invariance. Extending the angular time delay of Nussenzveig to multiple channels, we show that if one performs an average over the directions and subtracts the forward angle contribution containing an interference of the incident and scattered waves, the multichannel angle-dependent average time delay reduces to the one given by Smith. The present work also rectifies a recently misinterpreted misnomer of the relation due to Smith.
Angeletti, Silvia; De Cesaris, Marina; Hart, Jonathan George; Urbano, Michele; Vitali, Massimiliano Andrea; Fragliasso, Fulvio; Dicuonzo, Giordano
2015-12-01
Intra-laboratory turnaround time (TAT) is a key indicator of laboratory performance. Improving TAT is a complex task requiring staff education, equipment acquisition, and adequate TAT monitoring. The aim of the present study was to evaluate the intra-laboratory TAT after laboratory automation implementation (June 2013-June 2014) and to compare it to that in the preautomation period (July 2012-May 2013). Intra-laboratory TAT was evaluated both as the mean TAT registered and as the percentage of outlier (OP) exams. The mean TAT was 36, 38, and 34 min during the study periods, respectively. These values met the goal TAT established at 45 min. The OP, calculated at 45 min as well as at 60 min, decreased from 26 to 21 and from 11 to 5, respectively. From a focused analysis of the blood cell count, troponin I, and prothrombin (PT) tests, TAT improvement was more evident for tests requiring a longer preanalytical process. The follow-up of TAT from June 2013 to June 2014 revealed a reduction of the mean TAT as well as of the OP exams after automation implementation, and showed that automation more strongly affects tests whose preanalytical phase includes centrifugation of the sample, such as troponin I and PT. PMID:25609253
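The two TAT indicators used in this study (mean TAT and outlier percentage at a goal cutoff) are simple to compute; a minimal sketch follows, with invented sample data and the 45/60-minute cutoffs mirroring the goals quoted in the abstract:

```python
import statistics

def tat_summary(tat_minutes, goals=(45, 60)):
    """Mean TAT and percentage of outlier (OP) exams exceeding each goal.

    Illustrative only: the cutoffs follow the abstract above; the
    sample data passed in below are invented, not from the study.
    """
    mean_tat = statistics.mean(tat_minutes)
    op = {g: 100.0 * sum(t > g for t in tat_minutes) / len(tat_minutes)
          for g in goals}
    return mean_tat, op

# Invented example: 10 intra-laboratory TATs in minutes.
mean_tat, op = tat_summary([30, 34, 36, 40, 44, 46, 50, 55, 61, 70])
```

Monitoring both numbers matters: the mean can sit comfortably under the goal while the outlier percentage still flags a long tail of slow samples.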
Ambulance handovers: can a dedicated ED nurse solve the delay in ambulance turnaround times?
Clarey, A; Allen, M; Brace-McDonnell, S; Cooke, M W
2014-05-01
With ever increasing concern over ambulance handover delays this paper looks at the impact of dedicated A&E nurses for ambulance handovers and the effect it can have on ambulance waiting times. It demonstrates that although such roles can bring about reduced waiting times, it also suggests that using this as a sole method to achieve these targets would require unacceptably low staff utilisation. PMID:23636604
Mehra, Bhanu; Rawat, Deepti; Saxena, Shikhar
2015-01-01
Background Timeliness of reporting is of utmost importance to limit the spread of syphilis. The present analysis was undertaken to evaluate the turnaround time of syphilis testing (mainly the Venereal Disease Research Laboratory/VDRL test) in a sexually transmitted infections (STI) clinic in India; to find out the possible reasons for delay; to describe the trends of clinical indications for syphilis testing from an STI clinic; to assess the frequency of a positive syphilis serology among STI clinic attendees; and to analyse the follow-up rates of VDRL report collection. Materials and Methods Two hundred consecutive VDRL requests received at the serology laboratory of a tertiary care health facility from the STI clinic of the linked hospital were prospectively analysed to evaluate the above parameters. Results For the 200 requests audited, the mean absolute turnaround time of the VDRL test was 7.46±2.81 days. The mean duration of the pre-laboratory, laboratory and post-laboratory phases was 0, 4.69±2.13 and 2.77±2.51 days respectively. The interval from specimen receipt to performance of tests (mean duration=4.25±1.96 days) was the major reason for the long VDRL turnaround time. The common indications for syphilis testing in STI clinic attendees were lower abdominal pain (33%), vaginal discharge (26.5%) and genital ulcer disease (9%); and the follow-up rate for report collection was 71%. Conclusion Our study highlights the strong need to shift to alternative testing methods, mainly rapid point-of-care procedures for serodiagnosis of syphilis, in order to circumvent the problems of long turnaround time and low patient follow-up rates. PMID:26435966
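As a consistency check on the figures quoted above, the three phase means do sum to the absolute turnaround time:

```latex
\underbrace{0}_{\text{pre-laboratory}}
  + \underbrace{4.69}_{\text{laboratory}}
  + \underbrace{2.77}_{\text{post-laboratory}}
  = 7.46\ \text{days}
```

This decomposition also makes the bottleneck explicit: the laboratory phase, and within it the receipt-to-testing interval, dominates the total.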
Rosenkrantz, Andrew B; Bonavita, John A; Foran, Mark P; Matza, Brent W; McMenamy, John M
2014-02-01
The aim of this study was to assess for an association between radiologists' turnaround time (TAT) and report quality for emergency department (ED) abdominopelvic CT examinations. Reports of 60 consecutive ED abdominopelvic CT studies from five abdominal radiologists (300 total reports) were included. An ED radiologist, abdominal radiologist, and ED physician independently evaluated satisfaction with report content (1-10 scale), satisfaction with report clarity (1-10 scale), and the extent to which the report advanced the patient on a previously published clinical spectrum scale (1-5 scale). TAT (time between completion of imaging and completion of the final report) and report quality were compared between radiologists using unpaired t tests; associations between TAT and report quality scores for individual radiologists were assessed using Pearson's correlation coefficients. The five radiologists' mean TAT varied from 35 to 53 min. There were significant differences in report content in half of the comparisons between radiologists by observer 1 (p ≤ 0.032) and in a minority of comparisons by observer 2 (p ≤ 0.047), in report clarity in a majority of comparisons by observer 1 (p ≤ 0.031) and in a minority of comparisons by observer 2 (p ≤ 0.010), and in impact on patient care in a minority of comparisons for all observers (p ≤ 0.047). There were weak positive correlations between TAT and report content and clarity for three radiologists for observer 1 (r = 0.270-0.362) and no correlation between TAT and any report quality measure for the remaining combinations of the five radiologists and three observers (r = -0.197 to +0.181). While both TAT and report quality vary between radiologists, these two factors were not associated for individual radiologists. PMID:24197655
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics, and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble averaging approaches will yield new insight into the study of financial markets' dynamics.
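The distinction drawn in this abstract is between averaging across trading days at a fixed intraday time (ensemble) and averaging along one day's series with a sliding window (time average). A minimal sketch with synthetic data (the model and numbers here are invented, not the paper's intraday activity model):

```python
import random

def ensemble_average(days, k):
    """Average the k-th intraday observation across all trading days."""
    return sum(day[k] for day in days) / len(days)

def sliding_average(series, window):
    """Sliding-interval (time) averages within one day's series."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Synthetic "returns": 5 trading days x 6 intraday points.
random.seed(0)
days = [[random.gauss(0, 1) for _ in range(6)] for _ in range(5)]

ens = ensemble_average(days, 2)    # one number per fixed intraday time
sld = sliding_average(days[0], 3)  # 4 window averages within day 0
```

On data with a repeating intraday pattern, the ensemble average preserves the time-of-day structure that the sliding window smears out, which is the effect the paper exploits.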
ERIC Educational Resources Information Center
Smarick, Andy
2010-01-01
For as long as there have been struggling schools in America's cities, there have been efforts to turn them around. But overall, school turnaround efforts have consistently fallen far short of hopes and expectations. Quite simply, turnarounds are not a scalable strategy for fixing America's troubled urban school systems. Fortunately, findings from…
Angeletti, Silvia; Dicuonzo, Giordano; D'Agostino, Alfio; Avola, Alessandra; Crea, Francesca; Palazzo, Carlo; Dedej, Etleva; De Florio, Lucia
2015-07-01
A comparative evaluation of the turnaround time (TAT) of positive blood cultures before and after the introduction of matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) into the laboratory routine was performed. A total of 643 positive blood cultures were collected, of which 310 before and 333 after MALDI-TOF introduction. In the post-MALDI-TOF period, the median blood culture TAT decreased from 73.53 hours to 71.73 for Gram-positives, from 64.09 hours to 63.59 for Gram-negatives and from 115.7 hours to 47.62 for anaerobes. MALDI-TOF significantly decreased the TAT for anaerobes, for which antimicrobial susceptibility testing is not routinely performed. Furthermore, the major advantage of MALDI-TOF introduction was the decrease in the time to pathogen identification (TID) independently of the species, with an improvement of 93% for Gram-positives, 86% for Gram-negatives and 95% for anaerobes. In addition, high species-level identification rates and cost savings relative to conventional methods were achieved after MALDI-TOF introduction. PMID:26147149
Edwards, Thomas
2005-09-01
The Analytical Development Section (ADS) and the Statistical Consulting Section (SCS) of the Savannah River National Laboratory (SRNL) are participating in a Six Sigma initiative to improve the Defense Waste Processing Facility (DWPF) Laboratory. The Six Sigma initiative has focused on reducing the analytical turnaround time of samples from the Slurry Mix Evaporator (SME) by developing streamlined sampling and analytical methods [1]. The objective of Phase I was to evaluate the sub-sampling of a larger sample bottle and the performance of a cesium carbonate (Cs₂CO₃) digestion method. Successful implementation of the Cs₂CO₃ fusion method in the DWPF would have important time savings and convenience benefits because this single digestion would replace the dual digestion scheme now used. A single digestion scheme would result in more efficient operations in both the DWPF shielded cells and the inductively coupled plasma-atomic emission spectroscopy (ICP-AES) laboratory. By taking a small aliquot of SME slurry from a large sample bottle and dissolving the vitrified SME sample with carbonate fusion methods, an analytical turnaround time reduction from 27 hours to 9 hours could be realized in the DWPF. This analytical scheme has the potential for not only dramatically reducing turnaround times, but also streamlining operations to minimize wear and tear on critical shielded cell components that are prone to fail, including the Hydragard™ sampling valves and manipulators. Favorable results from the Phase I tests [2] led to the recommendation for a Phase II effort as outlined in the DWPF Technical Task Request (TTR) [3]. There were three major tasks outlined in the TTR, and SRNL issued a Task Technical and QA Plan [4] with a corresponding set of three major task activities: (1) Compare weight percent (wt%) total solids measurements of large volume samples versus peanut vial samples. (2) Evaluate Cs₂CO₃ and K₂CO₃ fusion methods using DWPF simulated glass (part 1) and (to the extent possible) using DWPF radioactive glass (part 2). (3) Recommend specific equipment to implement the new methods and work with SRNL's Equipment Engineering Section (EES) to ensure compatibility of the equipment with the DWPF shielded cells. The focus of this report is major task activity No. 2 (parts 1 and 2). The other two major task activities are to be addressed separately. The measurements supporting task activity No. 1 are being conducted at the DWPF. EES is currently designing and fabricating the DWPF cell equipment needed for the new method as part of major task activity No. 3.
Evaluation of Operation Turnaround.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL. Office of Educational Accountability.
In April of 1980, the following goals were adopted for Operation Turnaround: (1) to raise significantly the achievement levels of students; (2) to develop positive staff perceptions with respect to the children they teach and the children's potential for growth; (3) to build at each school a cohesive, committed, and competent staff which would…
Turnaround Principal Competencies
ERIC Educational Resources Information Center
Steiner, Lucy; Barrett, Sharon Kebschull
2012-01-01
When the Minneapolis Public Schools first set out to hire turnaround school principals, administrators followed their usual process--which focused largely on reputation and anecdotal support and considered mainly internal candidates. Yet success at the complicated task of turning around the fortunes of a failing school depends on exceptionally…
Turnaround Momentum Still Fragile
ERIC Educational Resources Information Center
Klein, Alyson
2012-01-01
The federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains a work in progress after two years, with more than 1,200 turnaround efforts under way but still no definitive verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a…
TIME INVARIANT MULTI ELECTRODE AVERAGING FOR BIOMEDICAL SIGNALS
Orellana, R. Martinez; Erem, B.; Brooks, D.H.
2013-01-01
One of the biggest challenges in averaging ECG or EEG signals is to overcome temporal misalignments and distortions due to uncertain timing or complex non-stationary dynamics. Standard methods average individual leads over a collection of epochs on a time-sample by time-sample basis, even when multi-electrode signals are available. Here we propose a method that averages multi-electrode recordings simultaneously by using spatial patterns, without relying on time or frequency. PMID:24465170
Time average vibration fringe analysis using Hilbert transformation
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-10-20
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Wesolowski, Laura G.; Nasrullah, Muazzam; Coombs, Robert W.; Rosenberg, Eric; Ethridge, Steven F.; Hutchinson, Angela B.; Dragavon, Joan; Rychert, Jennifer; Nolte, Frederick S.; Madory, James E.; Werner, Barbara G.
2016-01-01
Background. To improve clinical and public health outcomes through early human immunodeficiency virus (HIV) detection, fourth-generation antigen/antibody immunoassay (4IA) and supplemental testing results must be returned rapidly. Methods. We examined HIV testing data at Harborview Medical Center (HMC), Massachusetts General Hospital (MGH), and the Medical University of South Carolina (MUSC), which used 4IA and supplemental antibody and nucleic acid tests (NATs). At MGH and MUSC, HIV-1 Western blot (WB) and HIV-2 testing were conducted at a reference laboratory. We compared time from specimen collection to laboratory result for established (positive WB) and acute infections (reactive 4IA, negative/indeterminate WB, detectable NAT), and we calculated testing cost per positive-test result. Results. From 3731 (MUSC) to 19 774 (MGH) tests were conducted; 0.01% (MGH) to 0.05% (HMC) were acute infections. Each laboratory had reactive 4IA, WB-negative, or indeterminate specimens without NAT (ie, potential acute infections). Time to result was 1.5 (HMC) to 5.2 days (MGH) for acute and 1.0 (HMC) to 5.2 days (MGH) for established infections. Costs were $1054 (MGH) to $1521 (MUSC). Conclusions. Conducting supplemental testing in-house lowered turnaround times, which may be further reduced with rapid HIV-1/HIV-2 differentiation tests. Hospitals may benefit from quantitative NATs not requiring physician orders, so all potential acute infections receive NAT. PMID:26798766
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize the finite ones. The notion of weighted time is first defined in this work. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined by weighted time accordingly. We study the AWRT for weight-dependent walks. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) for four cases is discussed.
McHugh, Martin P; Gray, Steve J; Kaczmarski, Edward B; Guiver, Malcolm
2015-11-01
Since 1996 the Meningococcal Reference Unit (MRU) in Manchester has provided a national service for PCR confirmation of meningococcal and pneumococcal disease. Neisseria meningitidis serogroup B is predominant in the UK, accounting for >60% of cases. In response to this, the MRU has developed a quadruplex quantitative PCR that detects N. meningitidis capsule transporter (ctrA), serogroup B sialyltransferase (siaDB), Streptococcus pneumoniae pneumolysin (ply) and an internal control. The assay was prepared in a ready-to-use lyophilized format by Applied Biosystems. Laboratory validation showed excellent performance on a specificity panel of 52 isolates and improved detection in comparison with the routine assay. Testing of 244 patient samples showed sensitivity of 93% [95% confidence interval (CI): 88-98%] for the ctrA assay, 95% (95% CI: 91-100%) for the siaDB assay and 100% (95% CI: 95-100%) for the ply assay. Specificity was 100% (95% CI: 98-100%) for both meningococcal targets and 95% (95% CI: 92-98%) for ply. The quadruplex also retained high performance in mixed samples and had acceptable reproducibility. After introduction of the quadruplex into routine use, the turnaround time for N. meningitidis group B PCR confirmation was reduced from 37 to 29 h, and the internal control has proved useful for detecting inhibitory samples. The quadruplex assay provides rapid group B confirmation of meningococcal positive samples, enabling timely public health interventions for the most common disease-causing meningococcal serogroup in the UK. PMID:26253287
Off the Clock: What More Time Can (and Can't) Do for School Turnarounds. Education Sector Reports
ERIC Educational Resources Information Center
Silva, Elena
2012-01-01
If less time in the classroom is a cause of poor student performance, can adding more time be the cure? This strategy underlies a major effort to fix the nation's worst public schools. Billions of federal stimulus dollars are being spent to expand learning time on behalf of disadvantaged children. And extended learning time (ELT) is being proposed…
Human Capital in Turnaround Schools
ERIC Educational Resources Information Center
Ferris, Kristen
2012-01-01
Finding, keeping and supporting great educators presents the single biggest challenge to successful school turnarounds. Without teachers and administrators who bring the needed combination of skills and passion, nothing else will achieve the desired effect. The turnaround model supported by the U.S. Department of Education School Improvement Grant…
Making Sense of School Turnarounds
ERIC Educational Resources Information Center
Hess, Frederick M.
2012-01-01
Today, in a sector flooded with $3.5 billion in School Improvement Grant funds and the resulting improvement plans, there's great faith that "turnaround" strategies are a promising way to tackle stubborn problems with persistently low-performing schools. Unlike traditional reform efforts, with their emphasis on incremental improvement, turnarounds…
Daniel, Suzanne; Patel, Janki; Benson, Debbie
2014-01-01
Many patients approaching the end of their life express the preference to die at home,[1] although unfortunately the majority of people will still die in hospital.[2] For patients approaching the end of their life, it was noted anecdotally that often those who have expressed a preference to go home from hospital for end of life care may have their discharge delayed due to problems in the prescribing of common medications used to alleviate distressing symptoms at the end of life. An initial audit at Conquest Hospital showed an 89% error rate in these prescriptions, mostly related to prescribing controlled drugs such as morphine and midazolam. A single standardised dispensing chart for commonly prescribed medications at the end of life, in the form of both “Just in Case” medications and syringe driver medications, was created which addressed this problem by having the medications pre-written so as to meet all legal requirements for controlled drugs. The prescriber is able to choose and fill out an appropriate drug and dose by using flow-chart information overleaf and then sign the prescription to allow it to be dispensed. After an initial two month pilot period, a re-audit showed a significant fall in error rate down to 11%, as well as an improvement in turnaround time in dispensing the medications.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are the results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
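The thresholding idea above can be sketched without the report's full machinery. In place of the non-stationary GEV fit (lower threshold) and seasonal ARIMA model (upper threshold), the stand-in below simply takes empirical quantiles of daily minima and daily averages; the data and quantile level are invented for illustration:

```python
import statistics

def daily_extreme_thresholds(daily_min, daily_avg, q=0.05):
    """Empirical stand-in for the report's anomaly thresholds.

    Lower threshold: ~q quantile of daily minimum response times.
    Upper threshold: ~(1-q) quantile of daily average response times.
    A stationary sketch only; the report fits a non-stationary GEV
    and a seasonal ARIMA instead.
    """
    cuts_min = statistics.quantiles(daily_min, n=100)
    cuts_avg = statistics.quantiles(daily_avg, n=100)
    lo = cuts_min[int(q * 100) - 1]          # ~5th percentile
    hi = cuts_avg[int((1 - q) * 100) - 1]    # ~95th percentile
    return lo, hi

# Invented response times (ms): 30 daily minima and 30 daily averages.
mins = [20 + (i % 7) for i in range(30)]
avgs = [50 + (i % 11) for i in range(30)]
lo, hi = daily_extreme_thresholds(mins, avgs)
```

A day whose minimum falls below `lo` (suspiciously fast, e.g. failed transactions) or whose average rises above `hi` (suspiciously slow) would then be flagged for inspection.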
An averaging analysis of discrete-time indirect adaptive control
NASA Technical Reports Server (NTRS)
Phillips, Stephen M.; Kosut, Robert L.; Franklin, Gene F.
1988-01-01
An averaging analysis of indirect, discrete-time, adaptive control systems is presented. The analysis results in a signal-dependent stability condition and accounts for unmodeled plant dynamics as well as exogenous disturbances. This analysis is applied to two discrete-time adaptive algorithms: an unnormalized gradient algorithm and a recursive least-squares (RLS) algorithm with resetting. Since linearization and averaging are used for the gradient analysis, a local stability result valid for small adaptation gains is found. For RLS with resetting, the assumption is that there is a long time between resets. The results for the two algorithms are virtually identical, emphasizing their similarities in adaptive control.
Time-averaged photon-counting digital holography.
Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario
2015-09-15
Time-averaged holography has used photo-emulsions (in its early stages) and digital photo-sensitive arrays (later) to record holograms. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions in rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that taking long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions in which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907
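For context, the classical time-average fringe function for sinusoidal out-of-plane vibration of amplitude A(x,y) is shown below; the paper's extended expression, which also folds in the exposure time and the number of averaged holograms, is not reproduced here:

```latex
% Reconstructed intensity is modulated by the squared zero-order Bessel function
% (normal illumination and observation, wavelength \lambda assumed):
I(x,y) \propto \left| J_0\!\left( \frac{4\pi}{\lambda}\, A(x,y) \right) \right|^2
```

The dark fringes of the reconstruction thus trace the zeros of J₀, which is what lets the vibration amplitude be read off the fringe pattern.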
NASA Astrophysics Data System (ADS)
Sun, Yu; Dai, Meifeng; Xi, Lifeng
Recent work on networks has focused on weighted hierarchical networks, which differ significantly from their un-weighted counterparts. In this paper we study a family of weighted hierarchical networks recursively defined from an initial uncompleted graph, in which edge weights are assigned different values with a certain scale. Firstly, we study analytically the average weighted shortest path (AWSP) on the weighted hierarchical networks. Using a recursive method, we determine the AWSP explicitly. The obtained rigorous solution shows that the networks grow without bound, but only logarithmically with the network size, while the weighted shortest paths stay bounded. Then, for a biased random walk, we study the mean first-passage time (MFPT) between a hub node and any peripheral node. Finally, we deduce the analytical expression for the average of the MFPTs for a random walker originating from any node to first visit a hub node, known as the average receiving time (ART). The result shows that the ART is bounded or grows sublinearly with the network order, depending on the number of initial nodes and the weight factor, or grows quadratically with the iteration.
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
Time-average TV holography for vibration fringe analysis
Kumar, Upputuri Paul; Kalyani, Yanam; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2009-06-01
Time-average TV holography is a widely used method for vibration measurement. The method generates speckle-correlation time-averaged J0 fringes that can be used for full-field qualitative visualization of mode shapes at resonant frequencies of an object under harmonic excitation. In order to map the amplitudes of vibration, quantitative evaluation of the time-averaged fringe pattern is desired. A quantitative evaluation procedure based on the phase-shifting technique used in two-beam interferometry has been adopted for this application with some modification. The existing procedure requires a large number of frames to be recorded. We propose a procedure that reduces the number of frames required for the analysis. The TV holographic system used and the experimental results obtained with it on an edge-clamped, sinusoidally excited square aluminium plate sample are discussed.
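The J0 fringes mentioned above follow the classical time-average fringe function: observed brightness is proportional to the square of the zeroth-order Bessel function of the vibration phase amplitude. A minimal sketch (the wavelength and amplitude values are illustrative only):

```python
import math

def bessel_j0(x, terms=40):
    """Power-series evaluation of J0, adequate for the moderate |x| used here."""
    total, term = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            term *= -(x * x / 4.0) / (k * k)
        total += term
    return total

def fringe_intensity(amplitude_nm, wavelength_nm=632.8):
    """Time-averaged fringe brightness ~ J0^2 of the vibration phase amplitude
    Omega = (4*pi/lambda)*a for out-of-plane vibration (illustrative geometry)."""
    omega = 4.0 * math.pi * amplitude_nm / wavelength_nm
    return bessel_j0(omega) ** 2

# The first dark fringe sits at the first zero of J0 (omega ~ 2.405), i.e. at
# a vibration amplitude near 2.405 * 632.8 / (4*pi), about 121 nm, for He-Ne.
```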
An Advanced Time Averaging Modelling Technique for Power Electronic Circuits
NASA Astrophysics Data System (ADS)
Jankuloski, Goce
For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.
Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun
2015-01-01
Introduction: To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), provide suggestions, and find out whether laboratories accredited under International Organization for Standardization (ISO) 15189 or by the College of American Pathologists (CAP) show better performance on intra-TAT than non-accredited ones. Materials and methods: 479 Chinese clinical laboratories participating in the external quality assessment programs for chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median intra-TAT of routine and stat tests over the preceding week were collected through questionnaires. Results: The response rates for clinical biochemistry, blood gas, and haematology testing were 36% (479/1307), 38% (228/598), and 36% (449/1250), respectively. More than 50% of laboratories indicated that they had set intra-TAT median goals, and almost 60% declared that they generally monitored intra-TAT for every analyte they performed. Among all analytes we investigated, the intra-TAT of haematology analytes was shorter than that of biochemistry analytes, while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT reported by accredited and non-accredited laboratories. Conclusions: Many laboratories in China are aware of intra-TAT control and are making efforts to reach their targets, but there is still room for improvement. Accredited laboratories have better intra-TAT monitoring and target setting than non-accredited ones, but there are no significant differences in the median intra-TAT they report. PMID:26110033
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models are attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with edge weights scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and a trapping problem on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on the network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Time-averaged adiabatic ring potential for ultracold atoms
Sherlock, B. E.; Gildemeister, M.; Owen, E.; Nugent, E.; Foot, C. J.
2011-04-15
We report the experimental realization of a versatile ring trap for ultracold atoms. The ring geometry is created by the time-averaged adiabatic potential resulting from the application of an oscillating magnetic bias field to a rf-dressed quadrupole trap. Lifetimes for a Bose-Einstein condensate in the ring exceed 11 s and the ring radius was continuously varied from 50 μm to 262 μm. An efficient method of loading the ring from a conventional time-averaged orbiting potential trap is presented together with a rotation scheme which introduces angular momentum into the system. The ring presents an opportunity to study the superfluid properties of a condensate in a multiply connected geometry and also has applications for matter-wave interferometry.
Adjoint sensitivity analysis of time averaged quantities for unsteady flows
NASA Astrophysics Data System (ADS)
Wang, Qiqi
2011-11-01
Sensitivity analysis is an essential ingredient for data assimilation, aerodynamic design, uncertainty quantification and optimal flow control. In particular, the adjoint sensitivity analysis method has been shown to solve the very high dimensional optimization problems typically found in these applications. This talk focuses on recent developments in extending adjoint sensitivity analysis to unsteady flows. The adjoint equation of unsteady flows must be integrated backwards in time, and each backward time step must use the flow solution at the corresponding time. As a result, the entire time history of the flow solution must be either stored or recalculated. The invention of checkpointing schemes provides an economic solution to this challenge; in particular, the dynamic checkpointing scheme makes this solution practical for computational fluid dynamics problems. In unsteady flows, the quantities of interest are often long-time averages. We demonstrate that sensitivity analysis of these long-time-averaged quantities poses significant new challenges. A novel windowing scheme is developed to compute correct sensitivities for periodic unsteady flows, such as laminar vortex shedding. Initial investigation of sensitivity analysis of chaotic unsteady flows, i.e., transitional and turbulent flows, is also discussed. This work is supported by a subcontract of the PSAAP program at Stanford awarded to MIT.
Average waiting time profiles of uniform DQDB model
Rao, N.S.V.; Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D.
1993-09-07
The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system will become unfair as the load changes.
Stability of stochastic nonlinear switched systems with average dwell time
NASA Astrophysics Data System (ADS)
Wu, Xiaotai; Yan, Litan; Zhang, Wenbing; Tang, Yang
2012-03-01
This paper concerns the stability of nonlinear stochastic switched systems with delay. By using the Razumikhin method and the average dwell time approach, some sufficient criteria of stability are derived. It is shown that when the Lyapunov-like functions are decreasing in all active subsystems, the switched system is pth-moment exponentially stable. Moreover, it is also shown that under some conditions the system can be pth-moment stable and globally asymptotically stable, where the Lyapunov-like functions are increasing on some intervals. The effectiveness of the proposed results is illustrated by two examples.
Analytical calculation of average fixation time in evolutionary graphs
NASA Astrophysics Data System (ADS)
Askari, Marziyeh; Samani, Keivan Aghababaei
2015-10-01
The ability of a mutant individual to overtake the whole of a population is one of the fundamental problems in evolutionary dynamics. Fixation probability and Average Fixation Time (AFT) are two important parameters to quantify this ability. In this paper we introduce an analytical approach for exact calculation of AFT. Using this method we obtain AFT for two types of evolutionary graphs: cycle graph, as a highly homogeneous graph and star graph, as a highly heterogeneous graph. We use symmetries of these graphs to calculate AFT. Analytical results are confirmed with simulation. We also examine the effect of adding some random edges to each of these structures.
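The AFT studied above can also be estimated by brute force. The sketch below runs a simplified Monte Carlo Moran birth-death process on a cycle, exploiting the fact that a mutant patch on a cycle stays contiguous; it counts only state-changing boundary events as time, so it illustrates the quantities involved rather than reproducing the paper's exact analytical results.

```python
import random

def moran_cycle_aft(n, r=1.0, trials=2000, seed=42):
    """Monte Carlo estimate of fixation probability and average fixation time
    (AFT) for one mutant of relative fitness r on a cycle of n nodes.
    Simplification: a mutant patch on a cycle stays contiguous, so only its
    size is tracked, and only state-changing boundary events count as steps."""
    rng = random.Random(seed)
    fixed, steps_sum = 0, 0
    for _ in range(trials):
        size, steps = 1, 0
        while 0 < size < n:
            # At a patch boundary the mutant side wins with probability r/(r+1).
            size += 1 if rng.random() < r / (r + 1.0) else -1
            steps += 1
        if size == n:
            fixed += 1
            steps_sum += steps
    prob = fixed / trials
    aft = steps_sum / fixed if fixed else float("inf")
    return prob, aft

prob, aft = moran_cycle_aft(10)  # neutral mutant: exact fixation prob is 1/n
```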
On Average Waiting Time in Shared Dynamic Spectrum Allocation
NASA Astrophysics Data System (ADS)
Qadeer Sharif, Muhammad; Fan, Pingzhi; Pan, Yi
Dynamic spectrum allocation is a promising approach toward optimum utilization of rarely used frequency bands. However, to dynamically share the spectrum as a commons, there are challenges to overcome, such as the tendency toward greed in holding the spectrum. In particular, under heavy traffic load, a system may hold the spectrum longer than it requires in order to avoid delay in accessing the spectrum again. This kind of behavior can cause traffic blocking for all other systems participating in the sharing pool, and is most critical for time-sensitive applications. Consequently, awareness of the Average Waiting Time (AWT) and the factors controlling it is crucial for a system managing time-sensitive traffic. In this paper, analytical and simulation studies are performed to investigate the factors controlling the AWT, which will help systems manage their time-sensitive applications. To eliminate the impact of greed, a Greed Elimination Model, based on a distributed coordination function with an upper limit on transmission duration, is presented and analyzed.
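For intuition about how greedy spectrum holding inflates the AWT, a standard M/M/1 queueing formula (not the paper's model) already shows the effect: holding the channel longer lowers the effective service rate and lengthens everyone's wait. The rates below are hypothetical.

```python
def mm1_waiting_time(arrival_rate, service_rate):
    """Mean queueing delay W_q = rho / (mu - lambda) for an M/M/1 queue,
    used here as a generic proxy for spectrum-access waiting time."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

# Greedy holding lowers the effective service rate mu and inflates the AWT:
base = mm1_waiting_time(8.0, 10.0)    # requests/s released promptly
greedy = mm1_waiting_time(8.0, 9.0)   # spectrum held ~11% longer per use
```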
Code of Federal Regulations, 2010 CFR
2010-07-01
... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF STATE PLANS... convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...
Turnaround radius in modified gravity
NASA Astrophysics Data System (ADS)
Faraoni, Valerio
2016-03-01
In an accelerating universe in General Relativity there is a maximum radius above which a shell of test particles cannot collapse, but is dispersed by the cosmic expansion. This radius could be used in conjunction with observations of large structures to constrain the equation of state of the universe. We extend the concept of turnaround radius to modified theories of gravity for which the gravitational slip is non-vanishing.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
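As a minimal illustration of the estimation stage of the three-stage procedure above, the sketch below fits an AR(1) model, the simplest member of the ARIMA family, by least squares on synthetic data; real Box-Jenkins work would use a statistics package and the full identification-estimation-diagnosis cycle.

```python
import statistics

def fit_ar1(series):
    """Least-squares estimate of c and phi in x_t = c + phi * x_{t-1} + e_t."""
    x, y = series[:-1], series[1:]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

# A noiseless AR(1) sequence is recovered exactly:
xs = [1.0]
for _ in range(50):
    xs.append(0.5 + 0.8 * xs[-1])
c, phi = fit_ar1(xs)
```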
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
Organized by the key conditions areas for turnaround, "People, Program, Time and Money," this tool offers sample language for each contract element to serve as a model for modifications from a traditional CBA that may support a district's turnaround efforts. Sample language is offered from existing provisions in district-wide collective bargaining…
Combining Quick-Turnaround and Batch Workloads at Scale
NASA Technical Reports Server (NTRS)
Matthews, Gregory A.
2012-01-01
NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which initially led NAS to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload and enabling dynamic management of the resources set aside for that workload.
Peru turnaround tied to privatization
Not Available
1993-04-19
Petroleos del Peru predicts a 10% increase in Peru's oil production this year and a further increase in 1994. Petroperu also forecasts a sharp increase in drilling in Peru this year. After several years of declining oil production, a gradual turnaround in the cash-strapped country's petroleum industry is largely tied to its progress in privatization. The government last year began a campaign to privatize all state owned companies by the end of July 1995. The paper discusses forecasts by Petroperu; the contract of Occidental del Amazonas Inc.; the Petromar privatization; Great Western's contract; development of the Aguaytia gas field; and refinery contracts.
Series Overview. Sustaining School Turnaround at Scale. Brief 1
ERIC Educational Resources Information Center
Education Resource Strategies, 2012
2012-01-01
Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…
In Vivo T2 Relaxation Time Measurement with Echo-Time Averaging
Prescot, Andrew P.; Shi, Xianfeng; Choi, Changho; Renshaw, Perry F.
2014-01-01
The accuracy of metabolite concentrations measured using in vivo proton (1H) magnetic resonance spectroscopy (MRS) is enhanced following correction for spin-spin (T2) relaxation effects. In addition, metabolite proton T2 relaxation times provide unique information regarding cellular environment and molecular mobility. Echo-time (TE) averaging 1H MRS involves the collection and averaging of multiple TE steps, which greatly simplifies the resulting spectra due to the attenuation of spin-coupled and macromolecule resonances. Given the simplified spectral appearance and inherent metabolite T2 relaxation information, the aim of the present proof-of-concept study was to develop a novel data processing scheme to estimate metabolite T2 relaxation times from TE-averaged 1H MRS data. Spectral simulations are used to validate the proposed TE-averaging methods for estimating methyl proton T2 relaxation times for N-acetyl aspartate, total creatine, and choline-containing compounds. The utility of the technique and its reproducibility are demonstrated using data obtained in vivo from the posterior-occipital cortex of ten healthy control subjects. Compared to standard methods, distinct advantages of this approach include built-in macromolecule resonance attenuation, in vivo T2 estimates closer to reported values when the maximum TE is comparable to T2, and the potential for T2 calculation of metabolite resonances otherwise inseparable in standard 1H MRS spectra recorded in vivo. PMID:24865447
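Once coupled and macromolecule signals are suppressed, T2 estimation reduces to fitting a mono-exponential decay across TE values. A minimal log-linear fit of S(TE) = S0·exp(−TE/T2) on synthetic data (all TE values and the "true" T2 are made up for illustration):

```python
import math, statistics

def fit_t2(te_ms, signals):
    """Estimate T2 (ms) from S(TE) = S0 * exp(-TE / T2) by a log-linear
    least-squares fit of log(S) against TE."""
    logs = [math.log(s) for s in signals]
    mt, ml = statistics.fmean(te_ms), statistics.fmean(logs)
    slope = (sum((t - mt) * (l - ml) for t, l in zip(te_ms, logs))
             / sum((t - mt) ** 2 for t in te_ms))
    return -1.0 / slope

# Synthetic decay with T2 = 250 ms is recovered exactly:
tes = [30, 60, 90, 120, 150, 200, 250]
sig = [100.0 * math.exp(-t / 250.0) for t in tes]
t2 = fit_t2(tes, sig)
```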
Turnaround Schools and the Leadership They Require
ERIC Educational Resources Information Center
Leithwood, Kenneth; Strauss, Tiiu
2008-01-01
"Turnaround Schools and the Leadership They Require" was commissioned by the Canadian Education Association (CEA) in 2006. The research, conducted by Kenneth Leithwood and Tiiu Strauss, was a two-year study of Ontario schools identified as in need of "turnaround" meaning that a significant improvement in school achievement was required. The studyâ€¦
On time-dependent perturbation theory in matrix mechanics and time averaging
NASA Astrophysics Data System (ADS)
Casas, Fernando
2015-09-01
The time-dependent quantum perturbation theory developed by Born, Heisenberg and Jordan in 1926 is revisited. We show that it not only reproduces the standard theory formulated in the interaction picture, but also allows one to construct more accurate approximations if time averaging techniques are employed. The theory can be rendered unitary even if the expansion is truncated by using a transformation previously suggested by Heisenberg. We illustrate the main features of the procedure on a simple example which clearly shows its advantages in comparison with the standard perturbation theory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... part, section 4.3, to calculate the daily geometric average concentrations of sulfur dioxide emissions... appendix A of this part, section 5.4, to determine the daily geometric average percent reduction...
Sustainment of Fine Particle Cloud by Means of Time-Averaged Particle Driving Force in Plasmas
Gohda, Takuma; Iizuka, Satoru
2008-09-07
We have succeeded in sustaining a fine particle cloud by using a time-averaged particle driving (TAPD) method in an RF discharge plasma. The particles feel only the time-averaged force when the period of the pulses applied to the point-electrodes is shorter than the particle response time. The particles are transported to the mid-point between two point-electrodes.
The consequences of time averaging for measuring temporal species turnover in the fossil record
NASA Astrophysics Data System (ADS)
Tomašových, Adam; Kidwell, Susan
2010-05-01
Modeling time-averaging effects with simple simulations allows us to evaluate the magnitude of change in temporal species turnover that is expected to occur in long (paleoecological) time series with fossil assemblages. Distinguishing different modes of metacommunity dynamics (such as neutral, density-dependent, or trade-off dynamics) with time-averaged fossil assemblages requires scaling up time-averaging effects, because the decrease in temporal resolution and the decrease in temporal inter-sample separation (the two main effects of time averaging) substantially increase community stability relative to assemblages without or with weak time averaging. Large changes in temporal scale that cover centuries to millennia can have unprecedented effects on the temporal rate of change in species composition. Temporal variation in species composition monotonically decreases with increasing duration of time averaging in simulated fossil assemblages. Time averaging is also associated with reduced species dominance owing to temporal switching in the identity of dominant species. High degrees of time averaging can cause community parameters of local fossil assemblages to converge to those of the metacommunity rather than to those of individual local non-averaged communities. We find that the low variation in species composition observed among mollusk and ostracod subfossil assemblages can be explained by time averaging alone; low temporal resolution and reduced temporal separation among assemblages in time series can thus explain a substantial part of the reduced variation in species composition relative to unscaled predictions of a neutral model (i.e., one in which species do not differ in birth, death, and immigration rates on a per capita basis).
The structure of time-averaged assemblages can thus provide important insights into processes that act over larger temporal scales, such as the evolution of niches and dispersal, range-limit dynamics, taxon cycles, and speciation modes.
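The central effect described above, that pooling snapshots damps apparent turnover, can be reproduced in a toy simulation. The sketch below draws snapshot assemblages from a fixed metacommunity and compares Bray-Curtis dissimilarity between single snapshots with that between pooled ("time-averaged") assemblages; the metacommunity, sample sizes, and degree of pooling are all arbitrary.

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two count vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / (sum(a) + sum(b))

def snapshot(rng, weights, n=200):
    """One non-averaged assemblage: n individuals drawn from the metacommunity."""
    counts = [0] * len(weights)
    for i in rng.choices(range(len(weights)), weights=weights, k=n):
        counts[i] += 1
    return counts

def pooled(rng, weights, k):
    """Pool k snapshots, mimicking a time-averaged fossil assemblage."""
    total = [0] * len(weights)
    for _ in range(k):
        for i, c in enumerate(snapshot(rng, weights)):
            total[i] += c
    return total

rng = random.Random(1)
meta = [2.0 ** -i for i in range(10)]  # geometric rank-abundance distribution
raw = sum(bray_curtis(snapshot(rng, meta), snapshot(rng, meta))
          for _ in range(15)) / 15
avg = sum(bray_curtis(pooled(rng, meta, 25), pooled(rng, meta, 25))
          for _ in range(15)) / 15
# Pooling suppresses apparent turnover: avg is markedly smaller than raw.
```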
Some dynamic and time-averaged flow measurements in a turbine rig
NASA Astrophysics Data System (ADS)
Krause, L. N.; Fralick, G. C.
1980-01-01
Four types of sensors were used to make both dynamic and time-averaged flow measurements in a cold turbine rig to determine the magnitude of errors in time-averaged total-pressure measurement at a station 5 1/2 blade chords downstream of the rotor. The errors turned out to be negligible. The sensors and their intended use are discussed.
Unocal restarts Parachute Creek plant after turnaround
Not Available
1986-12-01
In late November, Unocal Corporation restarted its Parachute Creek shale oil project after a two-month turnaround, or overhaul operation. The plant is again operating at about 50% of its 10,000 barrel/day design capacity. This is the maximum rate which has been achieved for long periods of time. Unocal has been continuing to explore the possibility of adding a fluidized bed combustor to the retorting system. A decision on whether to proceed with building the FBC is expected to be made by July, 1987. The Colorado Mined Land Reclamation Board has allowed Unocal to make changes at Parachute Creek to better accommodate the handling of spent shale. These changes are described. Operating problems at the plant from start-up and modifications required are reviewed. Upgrading of the shale oil syncrude cannot be done at the Gary, Indiana refinery as planned. Current plans are to truck it to Utah, place it in a pipeline to Midland, Texas, then pipe it through Cushing, Oklahoma to the Unocal refinery in Lemont, Illinois.
Code of Federal Regulations, 2014 CFR
2014-07-01
... § 62.15390(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...
Code of Federal Regulations, 2013 CFR
2013-07-01
... § 62.15390(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...
Code of Federal Regulations, 2011 CFR
2011-07-01
... § 62.15390(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in Appendix A of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...
Code of Federal Regulations, 2012 CFR
2012-07-01
... § 62.15390(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... Combustion Units Constructed on or Before August 30, 1999 Continuous Emission Monitoring § 62.15210 How do...
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
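Power-law scaling of second-moment statistics with averaging length L can be checked, in the simplest possible way, by a log-log regression. The sketch below recovers a scaling exponent from synthetic data; the exponent and length scales are invented for illustration and are not the model's fitted values.

```python
import math

def fit_power_law(lengths, second_moments):
    """Log-log least-squares slope: if M2(L) ~ L**(-gamma), returns gamma."""
    xs = [math.log(l) for l in lengths]
    ys = [math.log(m) for m in second_moments]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

Ls = [1, 2, 4, 8, 16]               # averaging length scales (hypothetical km)
m2 = [3.0 * L ** -0.4 for L in Ls]  # synthetic scaling with gamma = 0.4
gamma = fit_power_law(Ls, m2)
```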
Local volume-time averaged equations of motion for dispersed, turbulent, multiphase flows
Sha, W.T.; Slattery, J.C.
1980-11-01
In most flows of liquids and their vapors, the phases are dispersed randomly in both space and time. These dispersed flows can be described only statistically or in terms of averages. Local volume-time averaging is used here to derive a self-consistent set of equations governing momentum and energy transfer in dispersed, turbulent, multiphase flows. The empiricisms required for use with these equations are the subject of current research.
NASA Astrophysics Data System (ADS)
Gomez, Vicente; Ramirez, Patricio; Cervera, Javier; Nasir, Saima; Ali, Mubarak; Ensinger, Wolfgang; Mafe, Salvador
2015-02-01
Taking advantage of a fluctuating environment for energy and information transduction is a significant challenge in biological and artificial nanostructures. We demonstrate here directional electrical transduction from fluctuating external signals using a single nanopore of conical shape immersed in an ionic aqueous solution. To this end, we characterize experimentally the average output currents obtained by the electrical rectification of zero time-average input potentials. The transformation of external potential fluctuations into nonzero time-average responses using a single nanopore in the liquid state is of fundamental significance for biology and nanophysics. This energy and information conversion also constitutes a significant step towards macroscopic scaling using multipore membranes.
Textiles, Tariffs, and Turnarounds: Profits Improved.
ERIC Educational Resources Information Center
Aronoff, Craig
1986-01-01
The U.S. textile industry may serve as a classic study in regeneration through market forces. The industry has recently made a turnaround in profits after having been recognized as an industry that was losing most of its profits to overseas producers. The reason for the industry's emerging strength is that it began to innovate after a…
Pre-K as School Turnaround Strategy
ERIC Educational Resources Information Center
Buenafe, Anthony Q.
2011-01-01
Both state and local school systems, recognizing the strong and growing evidence of pre-kindergarten's effectiveness in closing achievement gaps, are implementing early learning programs as part of their education reform efforts. In low-performing districts, pre-k has emerged as a promising turnaround strategy, reaching children before they become…
School Turnaround: Cristo Rey Boston High School Case Study
ERIC Educational Resources Information Center
Thielman, Jeff
2012-01-01
The mandates of the federal No Child Left Behind Law, including the threat of closing a school for underperformance, have led to multiple public school turnaround attempts. Because turnaround is a relatively new area of focus in education, there is limited research on what does and does not work, and even the definition of turnaround is a work in…
A rapid procedure for obtaining time-average interferograms of vibrating bodies
NASA Astrophysics Data System (ADS)
Rapoport, Eliezer; Bar, Doron; Shiloh, Klara
1991-04-01
A method for vibrational analysis by time-average holographic interferometry was developed, based on conventional holography and especially suitable for laboratories not equipped with an ESPI system. It overcomes the main drawback of conventional time-average recording: the prohibitively cumbersome procedure that requires recording a new hologram for each set of vibrational parameters. Only one holographic recording of the tested object at rest is needed, and image processing techniques are used to construct time-average holographic pictures of the vibrating body over any desired frequency (and amplitude) range. The method is simple and almost as rapid as ESPI (approaching real time), yielding pictures of good quality and fine detail with resolution limited by that of the TV camera used. The method was applied to, and is demonstrated in, the determination of vibrational modes of jet-engine blades.
Average optimality for continuous-time Markov decision processes with a policy iteration approach
NASA Astrophysics Data System (ADS)
Zhu, Quanxin
2008-03-01
This paper deals with the average expected reward criterion for continuous-time Markov decision processes in general state and action spaces. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. We give conditions on the system's primitive data under which we prove the existence of the average reward optimality equation and an average optimal stationary policy. Also, under our conditions we ensure the existence of ε-average optimal stationary policies. Moreover, we study some properties of average optimal stationary policies. We not only establish another average optimality equation on an average optimal stationary policy, but also present an interesting "martingale characterization" of such a policy. The approach provided in this paper is based on the policy iteration algorithm. It should be noted that our approach is rather different from both the usual "vanishing discounting factor approach" and the "optimality inequality approach" widely used in the previous literature.
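As an illustration of the policy iteration approach, the sketch below runs average-reward policy iteration on a toy finite, discrete-time MDP. The paper itself treats continuous-time processes on general state and action spaces; the functions and the two-state example here are invented for illustration only.

```python
import numpy as np

def evaluate_policy(P, r):
    # Solve g + h(s) = r(s) + sum_s' P(s, s') h(s'), fixing h(0) = 0.
    n = len(r)
    A = np.eye(n) - P
    A[:, 0] = 1.0                      # column 0 now carries the gain g
    x = np.linalg.solve(A, r)
    g, h = x[0], x.copy()
    h[0] = 0.0
    return g, h

def policy_iteration(P_all, r_all):
    # P_all[a]: transition matrix under action a; r_all[a]: reward vector.
    n = r_all.shape[1]
    policy = np.zeros(n, dtype=int)
    while True:
        P = np.array([P_all[policy[s], s] for s in range(n)])
        r = np.array([r_all[policy[s], s] for s in range(n)])
        g, h = evaluate_policy(P, r)
        q = r_all + P_all @ h          # action values relative to the bias h
        improved = q.argmax(axis=0)
        if np.array_equal(improved, policy):
            return policy, g
        policy = improved

# Toy 2-state, 2-action MDP (invented): action 1 always pays 1, action 0 pays 0.
P_all = np.array([[[0.5, 0.5], [0.5, 0.5]],
                  [[0.5, 0.5], [0.5, 0.5]]])
r_all = np.array([[0.0, 0.0], [1.0, 1.0]])
policy, gain = policy_iteration(P_all, r_all)   # policy [1, 1], gain 1.0
```

Fixing h(0) = 0 and letting the first unknown carry the gain g is a standard device for solving the unichain policy-evaluation equations.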
Neural Networks Used to Compare Designed and Measured Time-Average Patterns
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1999-01-01
Electronic time-average holograms are convenient for comparing the measured vibration modes of fan blades with those calculated by finite-element models. At the NASA Lewis Research Center, neural networks recently were trained to perform what had been a simple visual comparison of the predictions of the design models with the measurements. Finite-element models were used to train neural networks to recognize damage and strain information encoded in subtle changes in the time-average patterns of cantilevers. But the design-grade finite element models were unable to train the neural networks to detect damage in complex blade shapes. The design-model-generated patterns simply did not agree well enough with the measured patterns. Instead, hybrid-training records, with measured time-average patterns as the input and model-generated strain information as the output, were used to effect successful training.
Time domain averaging and correlation-based improved spectrum sensing method for cognitive radio
NASA Astrophysics Data System (ADS)
Li, Shenghong; Bi, Guoan
2014-12-01
Based on the combination of time domain averaging and correlation, we propose an effective time domain averaging and correlation-based spectrum sensing (TDA-C-SS) method for use in very low signal-to-noise ratio (SNR) environments. With the assumption that the received signals from the primary users are deterministic, the proposed TDA-C-SS method processes the received samples by a time-averaging operation to improve the SNR. A correlation operation is then performed with a correlation matrix to determine the existence of the primary signal in the received samples. The TDA-C-SS method does not need any prior information on the received samples or the associated noise power to achieve improved sensing performance. Simulation results are presented to show the effectiveness of the proposed TDA-C-SS method.
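The averaging-then-correlation idea can be sketched as follows. This is not the authors' exact TDA-C-SS algorithm; the frame structure, test statistic, threshold, and waveform are assumptions made for illustration.

```python
import numpy as np

def tda_c_ss(samples, n_frames, threshold):
    # Split the record into repeated frames and average them coherently;
    # a deterministic primary waveform adds up, noise power drops ~1/n_frames.
    frame_len = len(samples) // n_frames
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    avg = frames.mean(axis=0)
    R = np.outer(avg, avg)             # sample correlation matrix of the averaged frame
    stat = np.trace(R) / frame_len     # mean squared amplitude after averaging
    return stat > threshold, stat

rng = np.random.default_rng(0)
pu = np.sin(2 * np.pi * 5 * np.arange(64) / 64)           # assumed primary waveform
noise = rng.normal(0.0, 1.0, 100 * 64)
present, s1 = tda_c_ss(np.tile(pu, 100) + noise, 100, threshold=0.1)
absent, s0 = tda_c_ss(noise, 100, threshold=0.1)          # noise-only record
```

With 100 averaged frames the noise contribution to the statistic is roughly 0.01 while the signal contributes about 0.5, so a fixed threshold separates the two hypotheses even at low per-sample SNR.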
Code of Federal Regulations, 2014 CFR
2014-07-01
... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on...(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...
Code of Federal Regulations, 2012 CFR
2012-07-01
... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on...(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...
Code of Federal Regulations, 2011 CFR
2011-07-01
... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on...(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...
Code of Federal Regulations, 2013 CFR
2013-07-01
... SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste Combustion Units Constructed on...(a) to calculate emissions at 7 percent oxygen. (b) Use EPA Reference Method 19 in appendix A of this... potential sulfur dioxide emissions. (c) If you operate a Class I municipal waste combustion unit, use...
BĂĽrgi, Alfred; Scanferla, Damiano; Lehmann, Hugo
2014-01-01
Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor F ≈ 0.32 ± 0.08 for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels. PMID:25105551
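The duty-factor definition above, the ratio of time-averaged power to maximum output power, amounts to a one-line computation; the hourly traffic profile below is invented purely for illustration.

```python
import numpy as np

def duty_factor(power_samples, p_max):
    # Duty factor = time-averaged output power / maximum output power.
    return float(np.mean(power_samples) / p_max)

# Invented hourly profile: full power (20 W) for 8 h, near-idle (2 W) for 16 h.
hourly_power = np.array([20.0] * 8 + [2.0] * 16)
f = duty_factor(hourly_power, 20.0)    # (8*20 + 16*2) / 24 / 20 = 0.4
```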
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 7 2012-07-01 2012-07-01 false How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? 60.2943 Section 60.2943 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Operator Training...
Lorenzana, Roseanne M; Troast, Richard; Klotzbach, Julie M; Follansbee, Mark H; Diamond, Gary L
2005-02-01
Typical exposures to lead often involve a mix of long-term exposures to relatively constant exposure levels (e.g., residential yard soil and indoor dust) and highly intermittent exposures at other locations (e.g., seasonal recreational visits to a park). These types of exposures can be expected to result in blood lead concentrations that vary on a temporal scale with the intermittent exposure pattern. Prediction of short-term (or seasonal) blood lead concentrations arising from highly variable intermittent exposures requires a model that can reliably simulate lead exposures and biokinetics on a temporal scale that matches that of the exposure events of interest. If the exposure model averaging times (EMATs) exceed the shortest exposure duration that characterizes the intermittent exposure, uncertainties will be introduced into risk estimates because the exposure concentration used as input to the model must be time averaged to account for the intermittent nature of the exposure. We have used simulation as a means of determining the potential magnitude of these uncertainties. Simulations using models having various EMATs have allowed exploration of the strengths and weaknesses of various approaches to time averaging of exposures and their impact on risk estimates associated with intermittent exposures to lead in soil. The International Commission on Radiological Protection (ICRP) model of lead pharmacokinetics in humans simulates lead intakes that can vary in intensity over time spans as small as one day, allowing for the simulation of intermittent exposures to lead as a series of discrete daily exposure events. The ICRP model was used to compare the outcomes (blood lead concentration) of various time-averaging adjustments for approximating the time-averaged intake of lead associated with various intermittent exposure patterns. Results of these analyses suggest that standard approaches to time averaging (e.g., U.S. EPA) that estimate the long-term daily exposure concentration can, in some cases, result in substantial underprediction of short-term variations in blood lead concentrations when used in models that operate with EMATs exceeding the shortest exposure duration that characterizes the intermittent exposure. Alternative time-averaging approaches recommended for use in lead risk assessment more reliably predict short-term periodic (e.g., seasonal) elevations in blood lead concentration that might result from intermittent exposures. In general, risk estimates will be improved by simulation on shorter time scales that more closely approximate the actual temporal dynamics of the exposure. PMID:15787766
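The effect of the exposure model averaging time can be sketched numerically. The exposure pattern and concentrations below are invented, and this is a toy windowed average, not the ICRP model itself; it only shows how a long EMAT smooths an intermittent spike.

```python
import numpy as np

def windowed_average(conc_daily, emat_days):
    # Average a daily exposure-concentration series over windows of length
    # emat_days (the model's exposure-averaging time, EMAT).
    c = np.asarray(conc_daily, dtype=float)
    n = len(c) // emat_days * emat_days
    return c[:n].reshape(-1, emat_days).mean(axis=1)

# Invented pattern: 5 days/month at a high-lead site (1000 ppm), else 100 ppm.
days = np.array(([1000.0] * 5 + [100.0] * 25) * 12)
monthly_peak = windowed_average(days, 30).max()   # 250.0: the spike is smoothed away
daily_peak = windowed_average(days, 1).max()      # 1000.0: the spike is preserved
```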
The Impact of Overassignment on Grade Point Averages of First-Time Freshmen.
ERIC Educational Resources Information Center
Desler, Mary; North, Gary
1978-01-01
Data show that the first-time freshmen, who were overassigned in residence halls, did no worse than other students concerning grade point average. The peer pressure may have been a motivating force. There was no male/female difference. Data also show no difference in environment perception. (LPG)
Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.
ERIC Educational Resources Information Center
Gullickson, Arlen R.; And Others
This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time series quasi experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has variance σ². The robustness was studied only in terms of…
An upper bound to time-averaged space-charge limited diode currents
Griswold, M. E.; Fisch, N. J.; Wurtele, J. S.
2010-11-15
The Child-Langmuir law limits the steady-state current density across a one-dimensional planar diode. While it is known that the peak current density can surpass this limit when the boundary conditions vary in time, it remains an open question whether the average current can violate the Child-Langmuir limit under time-dependent conditions. For the case where the applied voltage is constant but the electric field at the cathode is allowed to vary in time, one-dimensional particle-in-cell simulations suggest that such a violation is impossible. Although a formal proof is not given, an upper bound on the time-averaged current density is offered.
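For reference, the steady-state Child-Langmuir current density discussed above is J_CL = (4ε0/9)·sqrt(2e/m)·V^(3/2)/d² for electrons in a planar gap, and can be evaluated directly; the example voltage and gap below are arbitrary.

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    # J_CL = (4*eps0/9) * sqrt(2e/m_e) * V^(3/2) / d^2, in A/m^2.
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2

j = child_langmuir_j(1000.0, 0.01)   # ~7.4e2 A/m^2 for a 1 kV, 1 cm planar gap
```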
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
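A minimal sketch of the constrained choice of average transit time follows, assuming a normal transit-time law truncated below at zero and an invented decreasing cost function; the paper's actual truncated density and cost-benefit functions are more general.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def punctuality(mu, sigma, scheduled, lower=0.0):
    # P(transit time <= scheduled) for a normal law truncated below at `lower`.
    z = lambda t: norm_cdf((t - mu) / sigma)
    return (z(scheduled) - z(lower)) / (1.0 - z(lower))

def optimal_average_transit(sigma, scheduled, target, cost, candidates):
    # Cheapest candidate average transit time meeting the punctuality target;
    # cost(mu) is the (assumed decreasing) operator cost of achieving mean mu.
    feasible = [m for m in candidates if punctuality(m, sigma, scheduled) >= target]
    return min(feasible, key=cost) if feasible else None

# Invented numbers: 60 min scheduled time, sigma = 5 min, 95% punctuality target.
best = optimal_average_transit(5.0, 60.0, 0.95, lambda m: 60.0 - m,
                               [45.0, 50.0, 55.0])
```

With these numbers a 55-minute average misses the 95% target, so the cheapest feasible choice is the 50-minute average rather than the costlier 45-minute one.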
Testing ΛCDM cosmology at turnaround: where to look for violations of the bound?
NASA Astrophysics Data System (ADS)
Tanoglidis, D.; Pavlidou, V.; Tomaras, T. N.
2015-12-01
In ΛCDM cosmology, structure formation is halted shortly after dark energy dominates the mass/energy budget of the Universe. A manifestation of this effect is that in such a cosmology the turnaround radius (the non-expanding mass shell furthest away from the center of a structure) has an upper bound. Recently, a new, local test for the existence of dark energy in the form of a cosmological constant was proposed based on this turnaround bound. Before designing an experiment that, through high-precision determination of masses and (independently) turnaround radii, will challenge ΛCDM cosmology, we have to answer two important questions: first, when are turnaround-scale structures predicted to be close enough to their maximum size, so that a possible violation of the bound may be observable? Second, which is the best mass scale to target for possible violations of the bound? These are the questions we address in the present work. Using the Press-Schechter formalism, we find that turnaround structures have in practice already stopped forming, and consequently the turnaround radius of structures must be very close to the maximum value today. We also find that the mass scale of ~10^13 M_sun characterizes the turnaround structures that start to form in a statistically important number density today, and even at an infinite time in the future, since structure formation has almost stopped. This mass scale also separates turnaround structures with qualitatively different cosmological evolution: smaller structures are no longer readjusting their mass distribution inside the turnaround scale, they asymptotically approach their ultimate abundance from higher values, and they are common enough to have, at some epoch, experienced major mergers with structures of comparable mass; larger structures exhibit the opposite behavior. We call this mass scale the transitional mass scale and we argue that it is the optimal one for the purpose outlined above.
As a corollary, we explain the different accretion behavior of small and larger structures observed in already conducted numerical simulations.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
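The minimum-length solution of A m = d described above can be sketched with a pseudoinverse, which returns the minimum-norm solution of an underdetermined consistent system. The averaging operator below is a simple running mean standing in for the actual amelogenesis-and-sampling matrix; the "seasonal" input is hypothetical.

```python
import numpy as np

def averaging_matrix(n_input, window):
    # Each measurement is a `window`-point running mean of the input series,
    # a crude stand-in for the amelogenesis/sampling averaging encoded in A.
    rows = n_input - window + 1
    A = np.zeros((rows, n_input))
    for i in range(rows):
        A[i, i:i + window] = 1.0 / window
    return A

def minimum_length_solution(A, d):
    # Minimum-norm solution of the underdetermined linear system A m = d.
    return np.linalg.pinv(A) @ d

m_true = np.sin(np.linspace(0.0, 2.0 * np.pi, 24))   # hypothetical seasonal input
A = averaging_matrix(24, 4)
d = A @ m_true                                        # the time-averaged "profile"
m_est = minimum_length_solution(A, d)                 # reproduces d: A @ m_est == d
```

Because the system is underdetermined, m_est is only one of infinitely many inputs consistent with d; the minimum-length choice is the regularization the paper evaluates against measurement error.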
Real-time fluid transport characterization through direct acquisition of the averaged propagator
NASA Astrophysics Data System (ADS)
Kittler, Wilfred; Hunter, Mark; Galvosas, Petrik
2015-08-01
Brownian motion (diffusion) and coherent flow are fundamental for many natural and artificial processes. Their accurate measurement and description are therefore highly desirable in many areas of science, engineering, and technology. Currently available methods commonly involve multiple experiments and substantial processing of acquired data. This contribution proposes a theoretical and experimental framework that enables one to directly examine the dynamics of fluid matter subject to diffusion and flow through the acquisition of the so-called averaged propagator. This statistical function holds all information on particle mobility due to flow and diffusion averaged over the observed fluid. The proposed method is based on a single instantaneous nuclear magnetic resonance measurement event. It also removes the need for data postprocessing by capturing the averaged propagator directly as the acquired signal, which enables the monitoring of diffusion and flow in real time.
NASA Technical Reports Server (NTRS)
Decker, A. J.
1984-01-01
The holographic recording of the time history of a flow feature in three dimensions is discussed. The use of diffuse illumination holographic interferometry or the three dimensional visualization of flow features such as shock waves and turbulent eddies is described. The double-exposure and time-average methods are compared using the characteristic function and the results from a flow simulator. A time history requires a large hologram recording rate. Results of holographic cinematography of the shock waves in a flutter cascade are presented as an example. Future directions of this effort, including the availability and development of suitable lasers, are discussed.
Convergence of Time Averages of Weak Solutions of the Three-Dimensional Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Foias, Ciprian; Rosa, Ricardo M. S.; Temam, Roger M.
2015-08-01
Using the concept of stationary statistical solution, which generalizes the notion of invariant measure, it is proved that, in a suitable sense, time averages of almost every Leray-Hopf weak solution of the three-dimensional incompressible Navier-Stokes equations converge as the averaging time goes to infinity. This system of equations is not known to be globally well-posed, and the above result answers a long-standing problem, extending to this system a classical result from ergodic theory. It is also shown that, from a measure-theoretic point of view, the stationary statistical solution obtained from a generalized limit of time averages is independent of the choice of the generalized limit. Finally, any Borel subset of the phase space with positive measure with respect to a stationary statistical solution is such that for almost all initial conditions in that Borel set and for at least one Leray-Hopf weak solution starting with that initial condition, the corresponding orbit is recurrent to that Borel subset and its mean sojourn time within that Borel subset is strictly positive.
A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
Global maps of rainfall are of great importance in connection with modeling of the earth's climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from ship-borne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant, on average, only a certain fraction of the observed pixels contain rain. The fraction of area covered by rain decreases as the pixel size becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.
Long, Lijun; Zhao, Jun
2015-07-01
This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed. PMID:25122844
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation scheme combines frequency-domain symbol-decision-aided estimation with a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. Our proposed ICI mitigation scheme is shown to be effective in removing the ICI for a simulated CO-OFDM system with 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
Long-term aerosol particle flux observations part I: Uncertainties and time-average statistics
NASA Astrophysics Data System (ADS)
Rannik, Ü.; Mammarella, I.; Aalto, P.; Keronen, P.; Vesala, T.; Kulmala, M.
Long-term eddy covariance particle flux measurements for the size range starting from 10 nm were performed at a boreal forest site in Southern Finland. Large variability in turbulent flux estimates is inherent to particle flux observations, and thus long-term particle flux measurements make it possible to obtain statistically significant results by suitable averaging. The particle flux random errors were estimated, and a parameterisation for the integral time scale of the turbulent flux was proposed. Using flux errors to classify single flux values according to statistical significance leads to systematically different deposition estimates on an ensemble-average basis. This must be avoided when determining unbiased average deposition fluxes. The role of the storage term in particle deposition evaluation was analysed. It was empirically determined that the method of storage term estimation discussed by [Finnigan, J., 2006. The storage term in eddy flux calculations. Agric. Forest Meteorol., 136, 108-113.] is not sensitive to the selection of the concentration averaging window at either end of the flux averaging period. It is argued that the storage change in real atmospheric conditions results from boundary layer development as well as source-sink activity, and that the filtering effect arising from averaging the concentration is therefore of less importance. Diurnal, seasonal and annual variability of particle fluxes was analysed, and it was observed that particle deposition rates are higher in winter. More detailed analysis of the functional dependencies of particle deposition on environmental factors, as well as the dependence on size, will be done in the second part of the paper.
Leading a supply chain turnaround.
Slone, Reuben E
2004-10-01
Just five years ago, salespeople at Whirlpool were in the habit of referring to their supply chain organization as the "sales disablers." Now the company excels at getting products to the right place at the right time--while managing to keep inventories low. How did that happen? In this first-person account, Reuben Slone, Whirlpool's vice president of Global Supply Chain, describes how he and his colleagues devised the right supply chain strategy, sold it internally, and implemented it. Slone insisted that the right focal point for the strategy was the satisfaction of consumers at the end of the supply chain. Most supply chain initiatives do the opposite: They start with the realities of a company's manufacturing base and proceed from there. Through a series of interviews with trade customers large and small, his team identified 27 different capabilities that drove industry perceptions of Whirlpool's performance. Knowing it was infeasible to aim for world-class performance across all of them, Slone weighed the costs of excelling at each and found the combination of initiatives that would provide overall competitive advantage. A highly disciplined project management office and broad training in project management were key to keeping work on budget and on benefit. Slone set an intense pace--three "releases" of new capabilities every month--that the group maintains to this day. Lest this seem like a technology story, however, Slone insists it is just as much a "talent renaissance." People are proud today to be part of Whirlpool's supply chain organization, and its new generation of talent will give the company a competitive advantage for years to come. PMID:15559580
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2016-03-01
A mixed semiclassical initial value representation expression for spectroscopic calculations is derived. The formulation takes advantage of the time-averaging filtering and the hierarchical properties of different trajectory-based propagation methods. A separable approximation is then introduced that greatly reduces (by about an order of magnitude) the computational cost compared with a full Herman-Kluk time-averaging semiclassical calculation for the same systems. The expression is exact for the harmonic case, and it is tested numerically for a Morse potential coupled to one or two additional harmonic degrees of freedom. Results are compared to full Herman-Kluk time-averaging calculations and exact quantum wavepacket propagations. We found the peak positions of the mixed semiclassical approximations to be in very good agreement with full quantum calculations, while overtone peak intensities are lower than the exact ones. Given the reduced computational effort required by this new mixed semiclassical approximation, we believe the present method makes spectroscopic calculations feasible for higher-dimensional systems than were accessible before.
Digital shearography using stroboscopic illumination in addition to time average method
NASA Astrophysics Data System (ADS)
Steinchen, Wolfgang; Gan, Ymin; Kupfer, Gerhard; Maeckel, Peter
2004-06-01
One of the most interesting and useful applications of shearographic interferometry is the detection, visualization and measurement of the mechanical vibration of opaque objects. To date, time-average shearography has been a qualitative interferometric method for determining oscillating loadings. The detected gradient of the deformation can be determined by changing the shearing distance. The fringes of the moving object are often faded; they become clearer after filtering with an FFT and against a uniform background intensity. The fringes formed in time-average shearography of sinusoidal motions have an irradiance described by the squared zero-order Bessel function J0^2. Quantitative interpretation of the shearogram requires a more precise analysis. One such technique, which extends or decreases the sensitivity of vibration measurements and determines the relative phase of vibration across the object surface, is stroboscopic illumination. Stroboscopic shearographic interferometry compensates for the deficiencies of time-average shearography at the expense of some increase in experimental complexity: recording stroboscopic shearograms requires two pulses from a double-pulse laser.
On the construction of a time base and the elimination of averaging errors in proxy records
NASA Astrophysics Data System (ADS)
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information stored in natural archives (e.g. ice cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems. Problem 1: Natural archives are equidistantly sampled on a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it is averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest in the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic, a natural assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis along which the measurements were performed.
The measured, averaged proxy signal is modeled as the window average ȳ(n, θ) = (1/Δ) ∫ y(m, θ) dm, taken over a window of width Δ centered on sample position n, where m is the position, θ are the unknown parameters, and y(m, θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive). This signal is modeled as the harmonic series y(m, θ) = A_0 + Σ_{k=1..H} [A_k sin(kω t(m)) + A_{k+H} cos(kω t(m))], with the time base t(m) = m T_S + g(m) T_S. Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen: g(m) = Σ_{l=1..b} b_l φ_l(m), where b is a vector of unknown time base distortion parameters and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method; vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.
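The 11.18% amplitude correction reported above is broadly consistent with the textbook attenuation of a harmonic averaged over a finite sampling window. The sketch below uses synthetic numbers (the spline time-base model itself is not reproduced) to compare the analytic attenuation factor sin(ωΔ/2)/(ωΔ/2) with a direct numerical window average:

```python
# Illustration of the averaging effect described above: averaging a harmonic
# y(t) = sin(w*t) over a window of width D attenuates the recovered amplitude
# by sin(w*D/2) / (w*D/2). All values here are synthetic choices.
import math

W = 2 * math.pi          # one cycle per unit time (e.g. one year)
D = 0.25                 # sampling window covers a quarter of a cycle

def window_averaged_amplitude(w, d, n_grid=1000):
    """Numerically average sin(w*t) over a window of width d centered on the crest."""
    t0 = math.pi / (2 * w)           # location of the crest, where sin(w*t) = 1
    step = d / n_grid
    vals = [math.sin(w * (t0 - d / 2 + (i + 0.5) * step)) for i in range(n_grid)]
    return sum(vals) / n_grid

predicted = math.sin(W * D / 2) / (W * D / 2)   # analytic attenuation factor
measured = window_averaged_amplitude(W, D)
```

For a window spanning a quarter of a cycle, both numbers come out near 0.90, so undoing the attenuation raises the amplitude by roughly 11%, the same order as the correction quoted in the abstract.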
Manned Orbital Transfer Vehicle (MOTV). Volume 5: Turnaround analysis
NASA Technical Reports Server (NTRS)
Boyland, R. E.; Sherman, S. W.; Morfin, H. W.
1979-01-01
The development of a low-cost reliable turnaround process to employ the MOTV in enhancing the utilization of the geosynchronous space region is analyzed. It is indicated that a routine effective turnaround/maintenance plan must make maximum use of flight data for maintenance planning, a high degree of test automation, and MOTV maintainability features in order to minimize tests, facilitate repair, and reduce manpower requirements. An effective turnaround plan provides a payback of reduced risks.
Organizational turnaround: the role of the nurse executive.
Burritt, Joan E
2005-11-01
For some hospitals, the complex issues associated with the current healthcare environment have resulted in progressive destabilization. In these situations, organizational turnaround is imperative to renew profitability and remain viable. As the administrator of the largest employee group, the nurse executive plays a central role in the turnaround process. The collective power and strength of the nursing service can move an organization by accomplishing needed changes and setting a tone and example for others. The author discusses the organization's turnaround. PMID:16282825
Luque-Vasquez, Fernando; Minjarez-Sosa, J. Adolfo; Rosas-Rosas, Luz del Carmen
2010-06-15
This paper deals with a class of semi-Markov control models with Borel state and control spaces, possibly unbounded costs, and unknown holding time distribution F. Assuming that F does not depend on state-action pairs, we combine suitable methods of statistical estimation of the mean holding time with control procedures to construct an average cost optimal Markovian policy π̂ = {f_n} and an optimal stationary policy {f_∞}, where f_n converges to f_∞ in the sense of Schael.
TIME-AVERAGE-BASED METHODS FOR MULTI-ANGULAR SCALE ANALYSIS OF COSMIC-RAY DATA
Iuppa, R.; Di Sciascio, G. E-mail: giuseppe.disciascio@roma2.infn.it
2013-04-01
Over the past decade, a number of experiments dealt with the problem of measuring the arrival direction distribution of cosmic rays, looking for information on the propagation mechanisms and the identification of their sources. Any deviation from isotropy may be regarded as a signature of unforeseen or unknown phenomena, especially if well localized in the sky and occurring at low rigidity. This induced experimenters to search for excesses down to angular scales as narrow as 10°, raising the issue of properly filtering out contributions from wider structures. A solution commonly envisaged was based on time-average methods to determine the reference value of the cosmic-ray flux. Such techniques are nearly insensitive to signals wider than the time window in use, thus allowing the analysis to focus on medium- and small-scale signals. Nonetheless, the signal often cannot be excluded from the calculation of the reference value, which induces systematic errors. The use of time-average methods recently revealed important discoveries about the medium-scale cosmic-ray anisotropy, present in both the northern and southern hemispheres. It is known that the excess (or deficit) is observed as less intense than in reality, and that fake deficit zones are rendered around true excesses, because of the absolute lack of a priori knowledge of which signal is true and which is not. This work is an attempt to critically review the use of time-average-based methods for observing extended features in the cosmic-ray arrival distribution pattern.
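The bias described above, a diluted excess ringed by artificial deficits, can be reproduced with a toy one-dimensional version of a time-average reference method. This is an illustrative sketch only, not any experiment's actual analysis chain; the bin counts and window size are invented:

```python
# Toy time-average reference method: the reference rate for each time bin is
# the mean over a sliding window, and the relative intensity is
# observed / reference. A narrow excess leaks into its own reference value,
# so it appears weaker than it is, and neighboring bins show fake deficits.

def relative_intensity(counts, half_window):
    rel = []
    for i in range(len(counts)):
        lo = max(0, i - half_window)
        hi = min(len(counts), i + half_window + 1)
        reference = sum(counts[lo:hi]) / (hi - lo)
        rel.append(counts[i] / reference)
    return rel

# a flat rate of 100 with a 30% excess in one bin
counts = [100] * 9 + [130] + [100] * 9
rel = relative_intensity(counts, 4)
# rel[9] comes out below the true 1.30, and bins near the excess dip below 1.0
```

The recovered excess is attenuated and surrounded by apparent deficits, exactly the systematic effect the abstract warns about.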
Code of Federal Regulations, 2011 CFR
2011-07-01
... SOURCES Standards of Performance for Small Municipal Waste Combustion Units for Which Construction is... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen... Class I municipal waste combustion unit, use EPA Reference Method 19 in appendix A of this part,...
Code of Federal Regulations, 2012 CFR
2012-07-01
... SOURCES Standards of Performance for Small Municipal Waste Combustion Units for Which Construction is... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen... Class I municipal waste combustion unit, use EPA Reference Method 19 in appendix A of this part,...
Code of Federal Regulations, 2014 CFR
2014-07-01
... SOURCES Standards of Performance for Small Municipal Waste Combustion Units for Which Construction is... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen... Class I municipal waste combustion unit, use EPA Reference Method 19 in appendix A of this part,...
Code of Federal Regulations, 2013 CFR
2013-07-01
... SOURCES Standards of Performance for Small Municipal Waste Combustion Units for Which Construction is... averaging times and units? (a) Use the equation in § 60.1460(a) to calculate emissions at 7 percent oxygen... Class I municipal waste combustion unit, use EPA Reference Method 19 in appendix A of this part,...
A real-time QRS detection method based on moving-averaging incorporating with wavelet denoising.
Chen, Szi-Wen; Chen, Hsiao-Chen; Chan, Hsiao-Lung
2006-06-01
In this paper, a simple moving-average-based computing method for real-time QRS detection is proposed. For signal preprocessing, the detection algorithm also incorporates a wavelet-based denoising procedure to effectively reduce the noise level in electrocardiogram (ECG) data. The overall computational structure of the proposed algorithm allows the QRS detection to be performed and implemented in real time with high time and memory efficiency. Algorithm performance was evaluated against the MIT-BIH Arrhythmia Database. The numerical results indicated that the algorithm achieved a detection rate of about 99.5% on the standard database and that it functioned reliably even when the signal quality of the measured ECG data was poor. PMID:16716445
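As a rough illustration of the moving-average building block (this is not the authors' published algorithm; the squaring step, window length, and threshold below are arbitrary illustrative choices):

```python
# Minimal QRS-style peak picking on a toy ECG: square the signal to emphasize
# large deflections, smooth with a causal moving average, then take local
# maxima above a threshold. Real detectors add band-pass filtering and, as in
# the paper above, wavelet denoising.

def moving_average(signal, window):
    """Causal moving average: each output is the mean of the last `window` samples."""
    out, acc = [], 0.0
    for i, x in enumerate(signal):
        acc += x
        if i >= window:
            acc -= signal[i - window]
        out.append(acc / min(i + 1, window))
    return out

ecg = [0, 0, 1, 5, 1, 0, 0, 0, 1, 6, 1, 0]     # toy trace with two "R peaks"
smooth = moving_average([x * x for x in ecg], 3)
peaks = [i for i in range(1, len(smooth) - 1)
         if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1] and smooth[i] > 2]
# the detected peaks land near the two large deflections in the toy trace
```

The causal form keeps memory bounded, which is what makes this family of detectors attractive for real-time embedded use.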
NASA Astrophysics Data System (ADS)
Young, Christopher; Lucca Fabris, Andrea; Gascon, Nicolas; Cappelli, Mark
2014-10-01
An extensive experimental campaign characterizes a 70 mm diameter stationary plasma thruster operating on xenon in the 200--500 W power range. This study resolves both time-averaged properties and oscillatory phenomena in the plasma discharge. Specifically, we explore the time variation of the plume ion velocity field referenced to periodic discharge current oscillations using time-synchronized laser induced fluorescence (LIF) measurements. This LIF scheme relies on a triggered signal acquisition gate locked at a given phase of the current oscillation period. The laser is modulated at a characteristic frequency and homodyne detection through a lock-in amplifier extracts the induced fluorescence signal out of the bright background emission. This work is sponsored by the U.S. Air Force Office of Scientific Research with Dr. Mitat Birkan as program manager. CVY acknowledges support from the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.
An analytical model for time-averaged ion energy distributions in collisional rf sheaths
NASA Astrophysics Data System (ADS)
Chen, Wen-Cong; Pu, Yi-Kang
2014-08-01
An analytical model is proposed for time-averaged ion energy distributions (IEDs) in collisional rf sheaths (λi < sm, where λi is the ion mean free path and sm is the sheath thickness), in which charge transfer is the dominant ion-neutral collision mechanism. Our model is different from the model in Israel et al 2006 J. Appl. Phys. 99 093303 in two aspects. Firstly, to calculate the ion flux, we consider ions created in both the space charge region and the quasi-neutral region instead of only the quasi-neutral region. Secondly, to calculate the ion energy, we use an rf-modulated transit time instead of only the average transit time. Consequently, our model is valid over a wider pressure range (from a weakly collisional sheath to a strongly collisional sheath) compared with the model in Israel et al 2006 J. Appl. Phys. 99 093303. Our model shows that, in a collisional rf sheath (argon, 5 Pa, 27.12 MHz and 100 W), 65% of the ion flux in the IED comes from secondary ions created in the space charge region. Results of our model are compared with those obtained by measurement, by a direct integration method and by particle-in-cell/Monte Carlo collision simulation.
A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.
1992-01-01
A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.
Average discharge rate representation of voice onset time in the chinchilla auditory nerve
Sinex, D.G.; McDonald, L.P.
1988-05-01
Responses of chinchilla auditory-nerve fibers to synthesized stop consonants differing in voice onset time (VOT) were obtained. The syllables, heard as /ga/--/ka/ or /da/--/ta/, were similar to those previously used by others in psychophysical experiments with human and with chinchilla subjects. Average discharge rates of neurons tuned to the frequency region near the first formant generally increased at the onset of voicing, for VOTs longer than 20 ms. These rate increases were closely related to spectral amplitude changes associated with the onset of voicing and with the activation of the first formant; as a result, they provided accurate information about VOT. Neurons tuned to frequency regions near the second and third formants did not encode VOT in their average discharge rates. Modulations in the average rates of these neurons reflected spectral variations that were independent of VOT. The results are compared to other measurements of the peripheral encoding of speech sounds and to psychophysical observations suggesting that syllables with large variations in VOT are heard as belonging to one of only two phonemic categories.
Time-averaged fluxes of lead and fallout radionuclides to sediments in Florida Bay
Robbins, J.A.; Holmes, C.; Halley, R.; Bothner, M.; Shinn, E.; Graney, J.; Keeler, G.; TenBrink, M.; Orlandini, K.A.; Rudnick, D.
2000-01-01
Recent, unmixed sediments from mud banks of central Florida Bay were dated using 210Pb/226Ra, and chronologies were verified by comparing sediment lead temporal records with Pb/Ca ratios in annual layers of coral (Montastrea annularis) located on the ocean side of the Florida Keys. Dates of sediment lead peaks (1978 ± 2) accord with prior observations of a 6 year lag between the occurrence of maximum atmospheric lead in 1972 and peak coral lead in 1978. Smaller lags of 1-2 years occur between the maximum atmospheric radionuclide fallout and peaks in sediment temporal records of 137Cs and Pu. Such lags are consequences of system time averaging (STA), in which atmospherically delivered particle-associated constituents accumulate and mix in a (sedimentary?) reservoir before transferring to permanent sediments and coral. STA model calculations, using time-dependent atmospheric inputs, produced optimized profiles in excellent accord with measured sediment 137Cs, Pu, lead, and coral lead distributions. Derived residence times of these particle tracers (16 ± 1, 15.7 ± 0.7, 19 ± 3, and 16 ± 2 years, respectively) are comparable despite differences in sampling locations, in accumulating media, and in element loading histories and geochemical properties. For a 16 year weighted mean residence time, STA generates the observed 6 year lead peak lag. Evidently, significant levels of nondegradable, particle-associated contaminants can persist in Florida Bay for many decades following elimination of external inputs. Present results, in combination with STA model analysis of previously reported radionuclide profiles, suggest that decade-scale time averaging may occur widely in recent coastal marine sedimentary environments. Copyright 2000 by the American Geophysical Union.
Geomagnetic field model for the last 5 My: time-averaged field and secular variation
NASA Astrophysics Data System (ADS)
Hatakeyama, Tadahiro; Kono, Masaru
2002-11-01
The structure of the geomagnetic field has been studied using paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected when considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and the fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singularity or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than in the present magnetic field. The equatorial dipole component is quite small after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work.
The PSV model shows large variance of the (2,1) component, which is in good agreement with the previous PSV models obtained by forward approaches. It is also indicated that the variance of the axial dipole term is very small. This is in conflict with the studies based on paleointensity data, but we show that this conclusion is not inconsistent with the paleointensity data because a substantial part of the apparent scatter in paleointensities may be attributable to effects other than the fluctuations in g10 itself.
Time-averaged molluscan death assemblages: Palimpsests of richness, snapshots of abundance
NASA Astrophysics Data System (ADS)
Kidwell, Susan M.
2002-09-01
Field tests that compare living communities to associated dead remains are the primary means of estimating the reliability of biological information in the fossil record; such tests also provide insights into the dynamics of skeletal accumulation. Contrary to expectations, molluscan death assemblages capture a strong signal of living species' rank-order abundances. This finding, combined with independent evidence for exponential postmortem destruction of dead cohorts, argues that, although the species richness of a death assemblage may be a time-averaged palimpsest of the habitat (molluscan death assemblages contain, on average, ~25% more species than any single census of the local live community, after sample-size standardization), species' relative-abundance data from the same assemblage probably constitute a much higher acuity record dominated by the most recent dead cohorts (e.g., from the past few hundred years or so, rather than the several thousand years recorded by the total assemblage and usually taken as the acuity of species-richness information). The pervasive excess species richness of molluscan death assemblages requires further analysis and modeling to discriminate among possible sources. However, time averaging alone cannot be responsible unless rare species (species with low rates of dead-shell production) are collectively more durable (have longer taphonomic half-lives) than abundant species. Species richness and abundance data thus appear to present fundamentally different taphonomic qualities for paleobiological analysis. Relative-abundance information is more snapshot-like and thus taphonomically more straightforward than expected, especially compared to the complex origins of dead-species richness.
Calculations of the time-averaged local heat transfer coefficients in circulating fluidized bed
Dai, T.H.; Qian, R.Z.; Ai, Y.F.
1999-04-01
The great potential to burn a wide variety of fuels and the reduced emission of pollutant gases, mainly SOx and NOx, have inspired investigators around the world to conduct research on circulating fluidized bed (CFB) technology at a brisk pace. An accurate understanding of heat transfer to bed walls is required for proper design of CFB boilers. To develop an optimum economic design of the boiler, it is also necessary to know how the heat transfer coefficient depends on different design and operating parameters. It is impossible to perform experiments under all operating conditions, so mathematical model prediction is a valuable alternative. Based on the cluster renewal theory of heat transfer in circulating fluidized beds, a mathematical model for predicting the time-averaged local bed-to-wall heat transfer coefficients is developed. The effects of the axial distribution of the bed density on the time-averaged local heat transfer coefficients are taken into account by dividing the bed into a series of sections along its height. Assumptions are made about the formation and falling process of clusters on the wall. The model predictions are in acceptable agreement with published data.
Enhancing MALDI Time-Of-Flight Mass Spectrometer Performance through Spectrum Averaging
Mitchell, Morgan; Mali, Sujina; King, Charles C.; Bark, Steven J.
2015-01-01
Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometers are simple and robust mass spectrometers used for analysis of biologically relevant molecules in diverse fields including pathogen identification, imaging mass spectrometry, and natural products chemistry. Despite high nominal resolution and accuracy, we have observed significant variability where 30-50% of individual replicate measurements have errors in excess of 5 parts-per-million, even when using 5-point internal calibration. Increasing the number of laser shots for each spectrum did not resolve this observed variability. What is responsible for our observed variation? Using a modern MALDI-TOF/TOF instrument, we evaluated contributions to variability. Our data suggest a major component of variability is binning of the raw flight time data by the electronics and clock speed of the analog-to-digital (AD) detection system, which requires interpolation by automated peak fitting algorithms and impacts both calibration and the observed mass spectrum. Importantly, the variation observed is predominantly normal in distribution, which implies multiple components contribute to the observed variation and suggests a method to mitigate this variability through spectrum averaging. Restarting the acquisition impacts each spectrum within the electronic error of the AD detector system and defines a new calibration function. Therefore, averaging multiple independent spectra and not a larger number of laser shots leverages this inherent binning error to mitigate variability in accurate MALDI-TOF mass measurements. PMID:25798583
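The averaging argument above can be caricatured numerically: if each independently restarted acquisition carries a roughly normal mass error, the mean of N spectra shrinks the error by about 1/√N. A hedged sketch with synthetic numbers (the true m/z, error size, and repetition counts below are invented, not taken from the paper):

```python
# Synthetic demonstration that averaging independent spectra reduces a
# normally distributed per-spectrum mass error. All numbers are invented.
import random

random.seed(0)
TRUE_MZ = 1000.0      # hypothetical true m/z of a calibrant peak
SIGMA_PPM = 5.0       # assumed per-spectrum error (parts-per-million)

def one_spectrum():
    # each acquisition restart gives an independent error draw
    return TRUE_MZ * (1.0 + random.gauss(0.0, SIGMA_PPM) * 1e-6)

def averaged_spectra(n):
    return sum(one_spectrum() for _ in range(n)) / n

def ppm_error(mz):
    return abs(mz - TRUE_MZ) / TRUE_MZ * 1e6

mean_single = sum(ppm_error(one_spectrum()) for _ in range(200)) / 200
mean_avg_16 = sum(ppm_error(averaged_spectra(16)) for _ in range(200)) / 200
# averaging 16 independent spectra cuts the typical error by roughly 1/sqrt(16)
```

This only models the normally distributed component the authors identified; any systematic calibration bias shared by all spectra would not average away.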
Active microelement testing by interferometry using time-average and quasi-stroboscopic techniques
NASA Astrophysics Data System (ADS)
Salbut, Leszek A.; Patorski, Krzysztof; Jozwik, Michal; Kacperski, Jacek M.; Gorecki, Christophe; Jacobelli, Alain; Dean, Thierry
2003-10-01
Increasing technological capabilities to produce active microelements (including microbeams, micromembranes and micromirrors) and their expanding areas of application introduce unprecedented requirements for their design and testing. The paper presents a concept of an optical measurement system and methodology for out-of-plane displacement testing of such active microelements. The system is based on a Twyman-Green microinterferometer. It combines the capabilities of time-average and quasi-stroboscopic interferometry methods to determine the dynamic behavior of active microelements (e.g., resonance frequencies and amplitude distributions in vibration modes). For mapping the zero-order Bessel function modulating the contrast of two-beam interference fringes, the four-frame technique is applied. Calibration of the contrast variation in time-averaged interferograms enables quantitative evaluation of the vibration amplitude encoded in the argument of the Bessel function. For qualitative estimation of the sign of the vibration amplitude, a simple quasi-stroboscopic technique is proposed, in which laser pulses have the same frequency as the signal activating the microelement under test. This self-synchronous system visualizes the shape of the tested element at maximum deflection. Representative results of measurements performed on active micromembranes are presented.
NASA Astrophysics Data System (ADS)
Basu, Santasri; McCrae, Jack E.; Fiorino, Steven T.
2015-05-01
A time-lapse imaging experiment was conducted to monitor the effects of the atmosphere over a period of time. A tripod-mounted digital camera captured images of a distant building every minute. Correlation techniques were used to calculate the position shifts between the images. Two factors cause shifts between the images: atmospheric turbulence, which moves the images randomly and quickly, and changes in the average refractive-index gradient along the path, which move the images vertically, more slowly, and perhaps in noticeable correlation with solar heating and other weather conditions. A technique for estimating the path-averaged refractive-index structure parameter Cn^2 from the random component of the image motion is presented here. The technique uses a derived set of weighting functions that depend on the size of the imaging aperture and the patch size in the image whose motion is being tracked. Since this technique is phase based, it can be applied to strong-turbulence paths where traditional irradiance-based techniques suffer from saturation effects.
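A minimal sketch of the correlation step (1-D only, integer shifts, invented pixel values; a real pipeline correlates 2-D patches and interpolates to sub-pixel accuracy):

```python
# Estimate the shift between two rows of pixels by maximizing their
# cross-correlation over a range of candidate integer shifts.

def estimate_shift(ref, img, max_shift):
    """Return the integer shift of `img` relative to `ref` (positive = shifted right)."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * img[i + s]
                    for i in range(len(ref))
                    if 0 <= i + s < len(img))
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

row = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]   # a bright feature around index 4
moved = row[2:] + [0, 0]               # the same feature moved 2 pixels left
```

Here estimate_shift(row, moved, 4) reports a shift of -2. Tracking such shifts frame to frame is what separates the fast random (turbulence) component from the slow vertical drift described above.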
Time-averaged and instantaneous porosity distribution in a fluidized bed with an immersed obstacle
Lyczkowski, R.W.; Bouillard, J.X.; Gidaspow, D.; Berry, G.F.
1990-02-01
Erosion in bubbling fluidized-bed combustors is a serious issue that may affect their reliability and economics. Available evidence suggests that the key to understanding this erosion is detailed knowledge of the coupled and complex phenomena of solids circulation and bubble motion. A thin, transparent "two-dimensional" rectangular fluidized bed with an obstacle served as a rough model for a fluidized-bed combustor. This model was studied experimentally and computationally, using two hydrodynamic equation sets. The computed hydrodynamic results agree reasonably well with experimental data. Bubble frequencies and sizes compare well with those obtained from a frame-by-frame analysis of a high-speed motion picture. Time-averaged porosities computed from both models agree with time-averaged porosity distributions measured with a gamma-ray densitometer. The principal differences between the data and the computations result from asymmetries present in the experiment and from the simplified solids rheology used in the hydrodynamic models. 53 refs., 26 figs., 7 tabs.
Time-averaged current analysis of a thunderstorm using ground-based measurements
NASA Technical Reports Server (NTRS)
Driscoll, Kevin T.; Blakeslee, Richard J.; Koshak, William J.
1994-01-01
The amount of upward current provided to the ionosphere by a thunderstorm that appeared over the Kennedy Space Center (KSC) on July 11, 1978, is reexamined using an analytic equation that describes a bipolar thunderstorm's current contribution to the global circuit in terms of its generator current, lightning currents, the altitudes of its charge centers, and the conductivity profile of the atmosphere. Ground-based measurements, which were obtained from a network of electric field mills positioned at various distances from the thunderstorm, were used to characterize the electrical activity inside the thundercloud. The location of the lightning discharges, the type of lightning, and the amount of charge neutralized during this thunderstorm were computed through a least squares inversion of the measured changes in the electric fields following each lightning discharge. These measurements provided the information necessary to implement the analytic equation, and consequently, a time-averaged estimate of this thunderstorm's current contribution to the global circuit was calculated. From these results the amount of conduction current supplied to the ionosphere by this small thunderstorm was computed to be less than 25% of the time-averaged generator current that flowed between the two vertically displaced charge centers.
NASA Astrophysics Data System (ADS)
Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho
2016-03-01
We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbines (WTs). An autocorrelation function that represents the physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy of fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
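For contrast with the proposed ATSA, the conventional TSA baseline can be sketched in a few lines (a minimal illustration assuming a known, constant rotation period in samples; real gearbox practice resamples to the shaft angle first):

```python
import numpy as np

def tsa(signal, period_samples):
    """Conventional time synchronous averaging: cut the vibration
    signal into whole rotations and average them, reinforcing
    components synchronous with the shaft while attenuating
    asynchronous noise roughly as 1/sqrt(number of rotations)."""
    signal = np.asarray(signal, float)
    n_rev = len(signal) // period_samples
    revs = signal[:n_rev * period_samples].reshape(n_rev, period_samples)
    return revs.mean(axis=0)
```

The paper's point is that the window applied before this averaging matters; the sketch above uses the whole revolution with no windowing.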
Bose–Einstein condensation in large time-averaged optical ring potentials
NASA Astrophysics Data System (ADS)
Bell, Thomas A.; Glidden, Jake A. P.; Humbert, Leif; Bromley, Michael W. J.; Haine, Simon A.; Davis, Matthew J.; Neely, Tyler W.; Baker, Mark A.; Rubinsztein-Dunlop, Halina
2016-03-01
Interferometric measurements with matter waves are established techniques for sensitive gravimetry, rotation sensing, and measurement of surface interactions, but compact interferometers will require techniques based on trapped geometries. In a step towards the realisation of matter wave interferometers in toroidal geometries, we produce a large, smooth ring trap for Bose–Einstein condensates using rapidly scanned time-averaged dipole potentials. The trap potential is smoothed by using the atom distribution as input to an optical intensity correction algorithm. Smooth rings with a diameter up to 300 μm are demonstrated. We experimentally observe and simulate the dispersion of condensed atoms in the resulting potential, with good agreement serving as an indication of trap smoothness. Under time of flight expansion we observe low energy excitations in the ring, which serves to constrain the lower frequency limit of the scanned potential technique. The resulting ring potential will have applications as a waveguide for atom interferometry and studies of superfluidity.
Damage Detection in Plate Structures Based on Space-time Autoregressive Moving Average Processes
NASA Astrophysics Data System (ADS)
Hu, X. T.; Qin, Z. Y.; Chu, F. L.
2011-07-01
This paper presents a method for damage detection in plate structures based on space-time series analysis. The plate is divided into several spatial sub-regions, and a space-time autoregressive moving average (STARMA) process is identified from the vibration responses of a given sub-region. The responses of the other sub-regions are then predicted with the identified model. The prediction errors are used as the criterion to detect and locate the damage by comparing the predicted responses with the measured ones. The validity of the methodology is demonstrated on a numerical model of a cantilever plate with cracks. The results show that this method can effectively detect the location and the size of the cracks even in the absence of prior knowledge about the undamaged structure.
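The prediction-error criterion can be illustrated with a plain (non-spatial) AR model fitted by least squares; this is a simplified sketch of the idea only, since the paper's STARMA model additionally couples neighboring sub-regions and includes moving-average terms:

```python
import numpy as np

def ar_prediction_error(reference, test, order=4):
    """Fit an AR(order) model to a reference response by least squares
    and return the RMS one-step prediction error on a test response; an
    error well above the reference level flags a possible change
    (e.g. damage) in the structure producing the response."""
    def design(x):
        # Each row holds `order` past samples; the target is the next one.
        rows = [x[i:i + order] for i in range(len(x) - order)]
        return np.array(rows), x[order:]
    A, y = design(np.asarray(reference, float))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    At, yt = design(np.asarray(test, float))
    return float(np.sqrt(np.mean((yt - At @ coef) ** 2)))
```

A response with the same dynamics as the reference yields a small error; a response with altered dynamics yields a larger one.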
Turnaround as Reform: Opportunity for Meaningful Change or Neoliberal Posturing?
ERIC Educational Resources Information Center
Mette, Ian M.
2013-01-01
This study explores the neoliberal agenda of turnaround school reform efforts in America by examining the application and transformation of a Midwest State Turnaround Schools Project for the public school system. Perceptions of administrators and state-level policy actors are considered. Data were collected from 13 participants during the…
The Effect of Locus of Control on School Turnaround
ERIC Educational Resources Information Center
Walston, Bradford
2012-01-01
This research focused on the school turnaround process in six turnaround elementary schools located in urban and rural areas of the state of North Carolina. The purpose of the study was to learn about the challenges facing the six schools, the process of improving student achievement, and, more specifically, the degree to which adaptive leadership…
"Turnaround" as Shock Therapy: Race, Neoliberalism, and School Reform
ERIC Educational Resources Information Center
Johnson, Amanda Walker
2013-01-01
"Turnaround" strategies of educational reform promise that school closure, reconstitution, privatizing, and reopening them will bring miraculous results. Questioning the implications, this article situates "turnaround" strategies locally, following the closure of a predominantly minority high school in 2008, in Austin, Texas. The neoliberal…
The State Role in School Turnaround: Emerging Best Practices
ERIC Educational Resources Information Center
Rhim, Lauren Morando, Ed.; Redding, Sam, Ed.
2014-01-01
This publication explores the role of the state education agency (SEA) in school turnaround efforts. An emphasis is placed on practical application of research and best practices related to the SEA's critical leadership role in driving and supporting successful school turnaround efforts. The publication is organized around the four goals of…
Time-Averaged Adiabatic Potentials: Versatile Matter-Wave Guides and Atom Traps
Lesanovsky, Igor; Klitzing, Wolf von
2007-08-24
We demonstrate a novel class of trapping potentials, time-averaged adiabatic potentials (TAAP), which allows the generation of a large variety of traps for quantum gases and matter-wave guides for atom interferometers. Examples include stacks of pancakes, rows of cigars, and multiple rings or sickles. The traps can be coupled through controllable tunneling barriers or merged altogether. We present analytical expressions for pancake-, cigar-, and ring-shaped traps. The ring geometry is of particular interest for guided matter-wave interferometry as it provides a perfectly smooth waveguide of widely tunable diameter and thus adjustable sensitivity of the interferometer. The flexibility of the TAAP would make possible the use of Bose-Einstein condensates as coherent matter waves in large-area atom interferometers.
ARECIBO MULTI-FREQUENCY TIME-ALIGNED PULSAR AVERAGE-PROFILE AND POLARIZATION DATABASE
Hankins, Timothy H.; Rankin, Joanna M. E-mail: Joanna.Rankin@uvm.edu
2010-01-15
We present Arecibo time-aligned, total intensity profiles for 46 pulsars over an unusually wide range of radio frequencies and multi-frequency, polarization-angle density diagrams, and/or polarization profiles for 57 pulsars at some or all of the frequencies 50, 111/130, 430, and 1400 MHz. The frequency-dependent dispersion delay has been removed in order to align the profiles for study of their spectral evolution, and wherever possible the profiles of each pulsar are displayed on the same longitude scale. Most of the pulsars within Arecibo's declination range that are sufficiently bright for such spectral or single pulse analysis are included in this survey. The calibrated single pulse sequences and average profiles are available by web download for further study.
NASA Astrophysics Data System (ADS)
Styk, A.; Brzeziński, M.
2011-05-01
The authors of the paper propose a novel approach to the analysis of fringe patterns described by the Bessel function. Patterns of this kind are encountered when using time-averaged interferometry for vibration investigations. The directional spatial carrier phase shifting technique (one of the automatic fringe pattern analysis methods) is proposed to decode the information encoded in the function argument. With an additional correction process (the analyzed J0 function differs from a sinusoidal one), the vibration amplitude of the investigated object may be evaluated. An unquestionable merit of the proposed technique is its processing simplicity and single-pattern analysis scheme. The paper presents the features of the proposed approach, as well as its possible measurement errors, via extensive numerical simulations. The experiments performed corroborate the theoretical findings.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes the suboptimal estimate a viable practical alternative to the composite average method generally employed at present.
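The minimum mean-squared-error linear estimate described above can be sketched as a small Gauss-Markov computation. The exponential autocorrelation model and all parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np

def optimal_average_weights(t, T0, T1, tau, noise_var):
    """Weights w of the linear estimate w @ y that minimizes the mean
    squared error of the time average of a signal over [T0, T1], given
    noisy samples y at irregular times t and an assumed exponential
    signal autocorrelation exp(-|dt| / tau)."""
    t = np.asarray(t, float)
    # Sample-sample covariance: signal part plus white measurement noise.
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / tau) + noise_var * np.eye(len(t))
    # Sample-average cross-covariance, by quadrature over [T0, T1].
    u = np.linspace(T0, T1, 400)
    c = np.exp(-np.abs(t[:, None] - u[None, :]) / tau).mean(axis=1)
    return np.linalg.solve(C, c)
```

The composite average corresponds to uniform weights 1/n; the optimal weights instead down-weight clustered or noisy samples, which is why they cannot do worse in mean squared error under the assumed statistics.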
Analysis of the time-averaged properties of sediment motion in a local scour process
NASA Astrophysics Data System (ADS)
Radice, Alessio; Porta, Giovanni; Franzetti, Silvio
2009-03-01
We examined the kinematics of sediment particles in the proximity of a vertical plate attached to the side wall of a pressurized duct. The laboratory phenomenon investigated was archetypal of local scour near bridge abutments in covered rivers. We conducted a clear-water scour experiment where we measured the temporal evolution of the local erosion depth; the latter was found to follow a logarithmic trend. The concentration and velocity of the moving particles were measured on the basis of digital movies taken at various evolution stages of the process. The spatial pattern of the sediment motion showed that the action of the principal vortex system becomes progressively more evident with the general lowering of the sediment transport intensity detected as the scour hole increases in time. Furthermore, the phenomenology of local sediment transport was analyzed; key findings of the research include (1) the time-averaged values of the sediment transport rate are linked with those of concentration and velocity through relationships which are almost invariant, regardless of the experimental time and of the location within the scour hole; (2) intermittency and irregularity of the sediment fluxes increase as their mean intensity decreases; (3) the relative weight of the temporal fluctuations of concentration and velocity in the time-mean sediment transport activity is significant throughout the entire process. Potential implications of the present results for the modeling of local scour processes in pressurized-flow conditions are discussed. The considerations made are likely to have analogous applicability to local scour with a free surface, since the large-scale evolutions of the erosion depth in the two conditions may not be greatly different. 
Given the qualitative analogy of the present findings with earlier ones obtained for one-dimensional bed load, it is suggested that similar considerations may hold also for the small-scale modeling of a wider class of sediment transport processes.
Modified box dimension and average weighted receiving time on the weighted fractal networks
Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xi, Lifeng; Su, Weiyi
2015-01-01
In this paper a family of weighted fractal networks, in which the weights of edges are assigned different values with a certain scale, is studied. For weighted fractal networks the definition of the modified box dimension is introduced, and a rigorous proof of its existence is given. Then, the modified box dimension, depending on the weight factor and the number of copies, is deduced. The walker is assumed, at each step, to move from its current node uniformly to any of its nearest neighbors; the weighted time for two adjacent nodes is the weight of the edge connecting them. The average weighted receiving time (AWRT) is then defined accordingly. The remarkable result obtained shows that, in a large network, when the weight factor is larger than the number of copies, the AWRT grows as a power-law function of the network order, with the exponent being the reciprocal of the modified box dimension. This result shows that the efficiency of the trapping process depends on the modified box dimension: the larger the value of the modified box dimension, the more efficient the trapping process is. PMID:26666355
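The quantity being averaged can be illustrated by a direct random-walk simulation on a small weighted graph (a toy adjacency structure for illustration, not the paper's fractal construction):

```python
import random

def weighted_receiving_time(adj, start, target, rng):
    """One walk from `start` to `target`: at each step the walker moves
    uniformly to a neighbor, and the elapsed weighted time of the step
    is the weight of the traversed edge. `adj` maps each node to a
    dict of {neighbor: edge weight}."""
    t, node = 0.0, start
    while node != target:
        nxt = rng.choice(sorted(adj[node]))
        t += adj[node][nxt]
        node = nxt
    return t

def awrt(adj, target, n_walks, rng):
    """Average weighted receiving time to `target`, averaged over all
    other start nodes and repeated walks."""
    starts = [v for v in adj if v != target]
    total = sum(weighted_receiving_time(adj, s, target, rng)
                for s in starts for _ in range(n_walks))
    return total / (len(starts) * n_walks)
```

On the paper's recursive networks this Monte Carlo estimate would exhibit the stated power-law growth with network order.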
NASA Technical Reports Server (NTRS)
Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.
1995-01-01
Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 +/- 1.8 beats.min-1.l-1, followed by a transient mild decrease in heart rate to -1.2 +/- 0.5 beats.min-1.l-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 +/- 0.1 beats.min-1.mmHg-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
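An impulse-response curve of the kind described can be generated directly from ARMA coefficients by driving the fitted difference equation with a unit impulse (a generic sketch of the computation; the coefficient values in the usage below are arbitrary, not the study's fitted values):

```python
import numpy as np

def arma_impulse_response(ar, ma, n):
    """Impulse response h[0..n-1] of the ARMA model
        y[t] = sum_i ar[i] * y[t-1-i] + sum_j ma[j] * x[t-j],
    obtained by feeding a unit impulse x = (1, 0, 0, ...) through the
    difference equation."""
    h = np.zeros(n)
    x = np.zeros(n)
    x[0] = 1.0
    for t in range(n):
        acc = sum(ma[j] * x[t - j] for j in range(len(ma)) if t - j >= 0)
        acc += sum(ar[i] * h[t - 1 - i] for i in range(len(ar)) if t - 1 - i >= 0)
        h[t] = acc
    return h
```

Because the recursion is evaluated forward in time only, the resulting curve is causal by construction, which is the property the study exploits.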
Time-weighted average SPME analysis for in planta determination of cVOCs.
Sheehan, Emily M; Limmer, Matt A; Mayer, Philipp; Karlson, Ulrich Gosewinkel; Burken, Joel G
2012-03-20
The potential of phytoscreening for plume delineation at contaminated sites has promoted interest in innovative, sensitive contaminant sampling techniques. Solid-phase microextraction (SPME) methods have been developed, offering quick, undemanding, noninvasive sampling without the use of solvents. In this study, time-weighted average SPME (TWA-SPME) sampling was evaluated for in planta quantification of chlorinated solvents. TWA-SPME was found to have increased sensitivity over headspace and equilibrium SPME sampling. Using a variety of chlorinated solvents and a polydimethylsiloxane/carboxen (PDMS/CAR) SPME fiber, most compounds exhibited near linear or linear uptake over the sampling period. Smaller, less hydrophobic compounds exhibited more nonlinearity than larger, more hydrophobic molecules. Using a specifically designed in planta sampler, field sampling was conducted at a site contaminated with chlorinated solvents. Sampling with TWA-SPME produced instrument responses ranging from 5 to over 200 times higher than headspace tree core sampling. This work demonstrates that TWA-SPME can be used for in planta detection of a broad range of chlorinated solvents and methods can likely be applied to other volatile and semivolatile organic compounds. PMID:22332592
New device for time-averaged measurement of volatile organic compounds (VOCs).
Santiago Sánchez, Noemí; Tejada Alarcón, Sergio; Tortajada Santonja, Rafael; Llorca-Pórcel, Julio
2014-07-01
Contamination by volatile organic compounds (VOCs) in the environment is an increasing concern since these compounds are harmful to ecosystems and even to human health. Indeed, many of them are considered toxic and/or carcinogenic. The main sources of pollution come from very diffuse focal points such as industrial discharges, urban water and accidental spills, as these compounds may be present in many products and processes (i.e., paints, fuels, petroleum products, raw materials, solvents, etc.), making their control difficult. The presence of these compounds in groundwater, influenced by discharges, leachate or effluents of WWTPs, is especially problematic. In recent years, legislation has become increasingly restrictive with the emissions of these compounds. From an environmental point of view, the European Water Framework Directive (2000/60/EC) sets out some VOCs as priority substances. This binding directive sets guidelines to control compounds such as benzene, chloroform, and carbon tetrachloride at a very low level of concentration and with a very high frequency of analysis. The presence of VOCs in the various effluents is often highly variable and discontinuous, since it depends on the variability of the sources of contamination. Therefore, in order to have complete information on the presence of these contaminants and to take preventive measures effectively, continuous control is important, requiring the development of new devices that obtain average concentrations over time. To date, due to technical limitations, there are no devices on the market that allow efficient continuous sampling of these compounds with detection limits low enough to meet the legal requirements while still capturing very sporadic, short-duration discharges. LABAQUA has developed a device which consists of a small peristaltic pump controlled by an electronic board that governs its operation by pre-programming.
A constant flow passes through a glass cell containing adsorbent material where the VOCs are retained. The adsorbent used, made in LABAQUA, is a mixture of alginic acid and activated carbon. Due to its high permeability it allows the passage and retention of THMs in a suitable way, thus solving many of the problems of other common adsorbents. Also, to avoid degradation of the adsorbent, it is wrapped in a low-density polyethylene (LDPE) membrane. After a sampling period of between 1 and 14 days, the adsorbent is collected and analyzed in the laboratory to quantify the average VOC concentration. This device resolves some of the limitations of the classical sampling system (spot samples), since it takes into account fluctuations in the concentration of VOCs by averaging them over time. This study presents the results obtained with the device for quantifying the VOCs legislated in Directive 2000/60/EC. We present the validation of linearity over time and the limits of quantification, as well as the sampling rates (Rs) obtained for each compound. The results demonstrate the high robustness and high sensitivity of the device. In addition, the system has been validated in real wastewater samples, comparing the results obtained with this device against the values from classical spot sampling, with excellent results. PMID:24388502
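Once the compound-specific sampling rate Rs has been calibrated, converting the mass recovered from the adsorbent into a time-weighted average concentration is a one-line calculation. This is the generic linear-uptake passive-sampler relation, with illustrative units, not the device's proprietary calibration:

```python
def twa_concentration(mass_ng, rs_ml_per_day, days):
    """Time-weighted average concentration (ng/mL) from the analyte
    mass accumulated on a passive sampler operating in its
    linear-uptake regime: C_TWA = m / (Rs * t)."""
    return mass_ng / (rs_ml_per_day * days)
```

Because the adsorbent integrates mass over the whole deployment, short spikes contribute to the average even if no spot sample happens to coincide with them.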
NASA Astrophysics Data System (ADS)
Brown, Gordon C.; Pryputniewicz, Ryszard J.
1998-05-01
An optical microscope, utilizing the principles of time-averaged hologram interferometry, is described for microelectromechanical systems (MEMS) applications. MEMS are devices fabricated via techniques such as microphotolithography to create miniature actuators and sensors. Many of these sensors are currently deployed in automotive applications which rely on the dynamic behavior of the sensor, e.g., airbag sensors, ride-monitoring suspension sensors, etc. Typical dimensions of current MEMS devices are measured in micrometers, a small fraction of the diameter of a human hair, and the current trend is to further decrease the size of MEMS devices to submicrometer dimensions. However, the smaller MEMS become, the more challenging it is to measure their dynamic characteristics with accuracy. An electro-optic holographic microscope (EOHM) for the purpose of studying the dynamic behavior of MEMS-type devices is described. Additionally, by performing phase measurements within an EOHM image, object displacements are determined, as illustrated by representative examples. With the EOHM, devices with surface sizes ranging from approximately 35 X 400 to 5 X 18 micrometers are studied while undergoing resonant vibrations at frequencies as high as 2 MHz.
NASA Astrophysics Data System (ADS)
Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.
2009-02-01
In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedances (Zc) for gerbil and chinchilla are computed with a three-dimensional hydrodynamic cochlear model using (1) a time-averaged Lagrangian, (2) a push-pull mechanism in the active case, and (3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the calculations of the present model with physiological measurements on the gerbil and chinchilla cochleae, such as VBM (Ren and Nuttall [1]), PST (Olson [2]), and ZC (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]). A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. The ZC simulations from the gerbil and chinchilla models show reasonably good agreement with measurements.
NASA Astrophysics Data System (ADS)
Umemura, K.; Ebina, K.
2014-12-01
Climate is the average of weather over some time period and shows characteristic behavior on each time scale. In paleoclimatic research, values of climate variables are measured from proxies that give time series of time-averaged variables. Therefore, understanding the dynamics of a time-averaged variable is important for investigating climate variations through different time scales. In our recent study, we formulated how the stochastic dynamics changes with the averaging time interval, using a one-dimensional first-order stochastic differential equation that contains parametrically controlled terms for a deterministic single-well or double-well potential force and a random force. The dynamics of the time-averaged variable is described by a conditional probability density function. In the single-well case, the function is derived analytically as a normal distribution with scaling parameters. In the case of a double-well potential, the function is obtained as a skew generalized normal distribution through numerical simulations. The mathematical framework of the stochastic dynamics of time-averaged variables is general and applicable to the analysis of many kinds of climate time series data. In this study, we apply the above framework to the analysis of proxy data from ice cores and discuss the time scaling of past climate variations. We test several models to infer the optimal model description for the data.
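The single-well case can be explored numerically with an Euler-Maruyama simulation of an Ornstein-Uhlenbeck process, recording non-overlapping time averages the way a proxy record would (a minimal sketch; the parameter values and the choice of a linear restoring force are illustrative assumptions):

```python
import numpy as np

def averaged_ou_series(theta, sigma, dt, n_steps, window, rng):
    """Simulate a single-well (Ornstein-Uhlenbeck) process
    dx = -theta * x dt + sigma dW by Euler-Maruyama, then return the
    series of non-overlapping time averages over `window` steps,
    mimicking a proxy that records time-averaged variables."""
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    n_win = n_steps // window
    return x[:n_win * window].reshape(n_win, window).mean(axis=1)
```

Comparing the distributions of the averaged series across different windows shows how the conditional density narrows with averaging time, consistent with the scaling described above.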
Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y
2016-02-01
Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third order polarization, P^(3)(t), which is given by an overlap of a first order wave packet, Ψ2^(1)(pu,t), prepared by a narrow band (ps) Raman pump pulse, Epu(t), on the upper electronic e2 potential energy surface (PES), with a second order wave packet, Ψ1^(2)(pr*,pu,t), that is prepared on the lower electronic e1 PES by a broadband (fs) probe pulse, Epr(t), acting on the first-order wave packet. In off-resonant FSRS, Ψ2^(1)(pu,t) resembles the zeroth order wave packet Ψ1^(0)(t) on the lower PES spatially, but with a force on Ψ2^(1)(pu,t) along the coordinates of the reporter modes due to displacements in the equilibrium position, so that Ψ1^(2)(pr*,pu,t) will oscillate along those coordinates, thus giving rise to similar oscillations in P^(3)(t) with the frequencies of the reporter modes. So, by recovering P^(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave-packet-averaged frequencies, ω̄_j(t), of the reporter modes j along the trajectory of Ψ1^(0)(t). The observable FSRS Raman gain is related to the imaginary part of P^(3)(ω). The imaginary and real parts of P^(3)(ω) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P^(3)(ω), whose Fourier transform then gives us the complex P^(3)(t) to analyze for ω̄_j(t). We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)].
We obtain the vibrational frequency up-shift time constants for the C12-H wagging mode at 216 fs and for the C10-H wagging mode at 161 fs which are larger than for the C11-H wagging mode at 127 fs, i.e., the C11-H wagging mode arrives at its final frequency while the C12-H and C10-H wagging modes are still up-shifting to their final values, agreeing with the findings of Yan et al. [Biochemistry 43, 10867 (2004)]. PMID:26851905
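The Kramers-Kronig step, recovering a full complex spectrum from its imaginary part for a causal response, has a compact discrete analogue via the FFT. The sketch below illustrates the principle only (it is not the authors' data pipeline, and sign conventions depend on the Fourier convention used):

```python
import numpy as np

def causal_spectrum_from_imag(im_P):
    """Recover the complex spectrum of a real, causal response from the
    imaginary part of its FFT: a discrete Kramers-Kronig relation.
    The t = 0 sample cannot be recovered and is set to 0."""
    N = len(im_P)
    # The imaginary part of the spectrum encodes the odd part of p(t).
    p_odd = np.fft.ifft(1j * np.asarray(im_P))
    p = np.zeros(N)
    # Causality (p = 0 for t < 0) lets us rebuild p from its odd part.
    p[1:N // 2] = 2.0 * p_odd[1:N // 2].real
    return np.fft.fft(p)
```

The recovered complex spectrum can then be inverse-transformed to obtain the complex time-domain polarization, as described in the abstract.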
NASA Astrophysics Data System (ADS)
Wu, Yue-Chao; Zhao, Bin; Lee, Soo-Y.
2016-02-01
Femtosecond stimulated Raman spectroscopy (FSRS) on the Stokes side arises from a third order polarization, P(3)(t), which is given by an overlap of a first order wave packet, |" separators=" Î¨2 ( 1 ) ( p u , t ) > , prepared by a narrow band (ps) Raman pump pulse, Epu(t), on the upper electronic e2 potential energy surface (PES), with a second order wave packet, <" separators=" Î¨1 ( 2 ) ( p r â— , p u , t ) | , that is prepared on the lower electronic e1 PES by a broadband (fs) probe pulse, Epr(t), acting on the first-order wave packet. In off-resonant FSRS, |" separators=" Î¨2 ( 1 ) ( p u , t ) > resembles the zeroth order wave packet |" separators=" Î¨1 ( 0 ) ( t ) > on the lower PES spatially, but with a force on |" separators=" Î¨2 ( 1 ) ( p u , t ) > along the coordinates of the reporter modes due to displacements in the equilibrium position, so that <" separators=" Î¨1 ( 2 ) ( p r â— , p u , t ) | will oscillate along those coordinates thus giving rise to similar oscillations in P(3)(t) with the frequencies of the reporter modes. So, by recovering P(3)(t) from the FSRS spectrum, we are able to deduce information on the time-dependent quantum-mechanical wave packet averaged frequencies, Ď‰ ÂŻ j ( t ) , of the reporter modes j along the trajectory of |" separators=" Î¨1 ( 0 ) ( t ) > . The observable FSRS Raman gain is related to the imaginary part of P(3)(Ď‰). The imaginary and real parts of P(3)(Ď‰) are related by the Kramers-Kronig relation. Hence, from the FSRS Raman gain, we can obtain the complex P(3)(Ď‰), whose Fourier transform then gives us the complex P(3)(t) to analyze for Ď‰ ÂŻ j ( t ) . We apply the theory, first, to a two-dimensional model system with one conformational mode of low frequency and one reporter vibrational mode of higher frequency with good results, and then we apply it to the time-resolved FSRS spectra of the cis-trans isomerization of retinal in rhodopsin [P. Kukura et al., Science 310, 1006 (2005)]. 
We obtain the vibrational frequency up-shift time constants for the C12-H wagging mode at 216 fs and for the C10-H wagging mode at 161 fs which are larger than for the C11-H wagging mode at 127 fs, i.e., the C11-H wagging mode arrives at its final frequency while the C12-H and C10-H wagging modes are still up-shifting to their final values, agreeing with the findings of Yan et al. [Biochemistry 43, 10867 (2004)].
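The Kramers-Kronig step described above (recovering the real part of a response function from its measured imaginary part) can be checked numerically. A minimal sketch, assuming a model damped-oscillator response rather than actual FSRS data, evaluates the principal-value integral on a grid offset from the pole:

```python
import numpy as np

# Model response: damped oscillator P(w) = 1 / (w0^2 - w^2 - i*g*w)
# (an assumed stand-in for P(3)(w); not the rhodopsin data)
w0, g = 5.0, 0.4
dw = 0.01
wp = dw * (np.arange(5000) + 0.5)   # frequency grid, midpoints dodge the pole
im_p = g * wp / ((w0**2 - wp**2)**2 + (g * wp)**2)   # "measured" imaginary part

def kk_real(w):
    """Kramers-Kronig: Re P(w) = (2/pi) PV int_0^inf w' Im P(w')/(w'^2 - w^2) dw'."""
    return (2.0 / np.pi) * np.sum(wp * im_p / (wp**2 - w**2)) * dw

w = 3.0
re_exact = (w0**2 - w**2) / ((w0**2 - w**2)**2 + (g * w)**2)
print(kk_real(w), re_exact)   # the two should agree closely
```

The midpoint grid makes the near-pole terms cancel in pairs, which is what realizes the principal value numerically.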
34. BOILER HOUSE, COAL CONVEYOR AND TURN-AROUND TRACK FOR COAL CARS (NOTE: COAL CAR No. 6 IN FAR BACK GROUND) - Delaware County Electric Company, Chester Station, Delaware River at South end of Ward Street, Chester, Delaware County, PA
Turnaround operations analysis for OTV. Volume 2: Detailed technical report
NASA Technical Reports Server (NTRS)
1988-01-01
The objectives and accomplishments were to adapt and apply the newly created database of Shuttle/Centaur ground operations. Previously defined turnaround operations analyses were to be updated for ground-based OTVs (GBOTVs) and space-based OTVs (SBOTVs), design requirements identified for both OTV and Space Station accommodations hardware, turnaround operations costs estimated, and a technology development plan generated to develop the required capabilities. Technical and programmatic data were provided for NASA pertinent to OTV ground and space operations requirements, turnaround operations, task descriptions, timelines and manpower requirements, OTV modular design and booster and Space Station interface requirements, SBOTV accommodations development schedule, cost and turnaround operations requirements, and a technology development plan for ground and space operations and space-based accommodations facilities and support equipment. Significant conclusions are discussed.
Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics
NASA Astrophysics Data System (ADS)
Nishimichi, Takahiro; Valageas, Patrick
2015-12-01
We present the redshift-space generalization of the equal-time angular-averaged consistency relations between (ℓ+n)- and n-point polyspectra (i.e., the Fourier counterparts of correlation functions) of the cosmological matter density field. Focusing on the case of the ℓ=1 large-scale mode and n small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of N-body simulations, we find that our relation on the bispectrum in the squeezed limit (i.e., the limit where one wave number is much smaller than the other two) is valid to better than 20% up to 1 h Mpc^-1, for both the monopole and quadrupole at z = 0.35, in a ΛCDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds worse after using this ansatz, we can still recover it within 20% up to 1 h Mpc^-1, at z = 0.35 for the monopole. On larger scales, k = 0.2 h Mpc^-1, it still holds within the statistical accuracy of idealized simulations of volume 8 h^-3 Gpc^3 without shot-noise error.
Mercury's Time-Averaged and Induced Magnetic Fields from MESSENGER Observations
NASA Astrophysics Data System (ADS)
Johnson, C. L.; Winslow, R. M.; Anderson, B. J.; Purucker, M. E.; Korth, H.; Al Asad, M. M.; Slavin, J. A.; Baker, D. N.; Hauck, S. A.; Phillips, R. J.; Zuber, M. T.; Solomon, S. C.
2012-12-01
Observations from MESSENGER's Magnetometer (MAG) have allowed the construction of a baseline, time-averaged model for Mercury's magnetosphere. The model, constructed with the approximation that the magnetospheric shape can be represented as a paraboloid, includes two external (magnetopause and magnetotail) current systems and an internal (dipole) field. We take advantage of the geometry of the orbital MAG data to constrain all but one of the model parameters, and their ranges, directly from the observations. These parameters are then used as a priori constraints in the magnetospheric model, and the remaining parameter, the dipole moment, is estimated from a grid search. The model provides an excellent fit to the MAG observations, with a root-mean-square misfit of less than 20 nT globally. The mean distance from the planetary dipole origin to the magnetopause subsolar point, RSS, is 1.45 RM (where RM = 2440 km) and the mean planetary dipole moment is 190 nT RM^3. Temporal variations in the global-scale magnetic fields result from changes in solar wind ram pressure, Pram, at Mercury that arise from the planet's 88-day eccentric orbit around the Sun and from transient, rapid changes in solar wind conditions. For a constant planetary dipole moment, RSS varies as Pram^(-1/6). However, magnetopause crossings obtained from several Mercury years of MESSENGER observations indicate that RSS is proportional to Pram^(-1/a) where a is greater than 6, suggesting induction in Mercury's highly conducting metallic interior. We obtain an effective dipole moment that varies by up to ~15% about its mean value. We further investigate the periodic 88-day induction signature and use the paraboloid model to describe the spatial structure in the inducing magnetopause field, together with estimates for the outer radius of Mercury's liquid core and possible overlying solid iron sulfide layer, to calculate induced core fields.
The baseline magnetospheric model is adapted to include the 88-day periodic induction signature, and residuals to this time-varying global model from magnetically quiet orbits are then used to investigate structure at higher degree and order in the internal and external fields.
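The pressure-balance scaling quoted above, RSS ∝ Pram^(-1/6) for a constant dipole moment, is easy to sketch numerically. A minimal example, using the abstract's mean standoff of 1.45 RM and an assumed (illustrative) mean ram pressure:

```python
# Magnetopause standoff under pressure balance: R_ss ~ P_ram^(-1/6).
R_SS_MEAN = 1.45   # planetary radii (R_M), from the abstract
P_MEAN = 10.0      # nPa, assumed mean solar-wind ram pressure (illustrative)

def standoff(p_ram, exponent=6.0):
    """Standoff distance in R_M for ram pressure p_ram (nPa)."""
    return R_SS_MEAN * (p_ram / P_MEAN) ** (-1.0 / exponent)

# Doubling the ram pressure compresses the magnetopause by a factor 2^(1/6),
# roughly 11%; a larger exponent (a > 6, as observed) weakens the response.
print(standoff(20.0), standoff(20.0, exponent=8.0))
```

The inferred a > 6 corresponds to a shallower compression than the rigid-dipole case, the signature of induction in the conducting interior.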
Where the world stands still: turnaround as a strong test of ΛCDM cosmology
Pavlidou, V.; Tomaras, T.N. E-mail: tomaras@physics.uoc.gr
2014-09-01
Our intuitive understanding of cosmic structure formation works best in scales small enough so that isolated, bound, relaxed gravitating systems are no longer adjusting their radius; and large enough so that space and matter follow the average expansion of the Universe. Yet one of the most robust predictions of ΛCDM cosmology concerns the scale that separates these limits: the turnaround radius, which is the non-expanding shell furthest away from the center of a bound structure. We show that the maximum possible value of the turnaround radius within the framework of the ΛCDM model is, for a given mass M, equal to (3GM/Λc^2)^(1/3), with G Newton's constant and c the speed of light, independently of cosmic epoch, exact nature of dark matter, or baryonic effects. We discuss the possible use of this prediction as an observational test for ΛCDM cosmology. Current data appear to favor ΛCDM over alternatives with local inhomogeneities and no Λ. However there exist several local-universe structures that have, within errors, reached their limiting size. With improved determinations of their turnaround radii and the enclosed mass, these objects may challenge the limit and ΛCDM cosmology.
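The limiting radius (3GM/Λc^2)^(1/3) can be evaluated directly. A minimal sketch, assuming a Planck-like value for the cosmological constant (not stated in the abstract):

```python
import math

# Maximum turnaround radius in LCDM: R_max = (3 G M / (Lambda c^2))^(1/3)
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
LAMBDA = 1.1e-52   # m^-2, cosmological constant (assumed Planck-like value)
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

def r_turnaround_max(mass_kg):
    return (3.0 * G * mass_kg / (LAMBDA * C**2)) ** (1.0 / 3.0)

# A 10^15 solar-mass cluster: limiting radius of order 10 Mpc
r_mpc = r_turnaround_max(1e15 * M_SUN) / MPC
print(r_mpc)
```

Note the M^(1/3) scaling: an 8x more massive structure has exactly twice the limiting radius.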
Code of Federal Regulations, 2010 CFR
2010-07-01
... geometric average concentrations of sulfur dioxide emissions. If you are monitoring the percent reduction of... daily geometric average percent reduction of potential sulfur dioxide emissions. (c) If you operate...
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images and cross-shore profiles of modeled wave energy dissipation (xD). Not only is Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar.
Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.
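The multiple-regression correction described above (sandbar position regressed on pixel-intensity location, wave height, and water level) is ordinary least squares. A minimal sketch on synthetic data with assumed coefficients, not the Brazilian dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
xi = rng.uniform(100, 300, n)     # cross-shore pixel-intensity location (m)
hs = rng.uniform(0.5, 3.0, n)     # significant wave height (m)
eta = rng.uniform(-0.5, 0.5, n)   # water level (m)

# Synthetic "true" sandbar position with assumed coefficients plus noise
xb = 5.0 + 0.95 * xi - 8.0 * hs - 12.0 * eta + rng.normal(0, 2.0, n)

# Ordinary least squares: xb ~ 1 + xi + Hs + eta
X = np.column_stack([np.ones(n), xi, hs, eta])
coef, *_ = np.linalg.lstsq(X, xb, rcond=None)
resid = xb - X @ coef
print(coef, resid.std())   # residual spread shrinks once Hs and eta are included
```

Conditioning on Hs and water level is what removes the hydrodynamically induced offset between xi and xb.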
Time-Averaged Velocity, Temperature and Density Surveys of Supersonic Free Jets
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.; Mielke, Amy F.
2005-01-01
A spectrally resolved molecular Rayleigh scattering technique was used to simultaneously measure the axial component of velocity U, static temperature T, and density ρ in unheated free jets at Mach numbers M = 0.6, 0.95, 1.4 and 1.8. The latter two conditions were achieved using contoured convergent-divergent nozzles. A narrow line-width continuous wave laser was passed through the jet plumes and molecular scattered light from a small region on the beam was collected and analyzed using a Fabry-Perot interferometer for optical spectrum analysis. Air density at the probe volume was determined by monitoring the intensity variation of the scattered light using photo-multiplier tubes. The Fabry-Perot interferometer was operated in the imaging mode, whereby the fringe formed at the image plane was captured by a cooled CCD camera. Special attention was given to remove dust particles from the plume and to provide adequate vibration isolation to the optical components. The velocity profiles from various operating conditions were compared with those measured by a Pitot tube. An excellent comparison within 5 m/s demonstrated the maturity of the technique. Temperature was measured least accurately, within 10 K, while density was measured within 1% uncertainty. The survey data consisted of centerline variations and radial profiles of time-averaged U, T and ρ. The static temperature and density values were used to determine static pressure variations inside the jet. The data provided a comparative study of jet growth rates with increasing Mach number. The current work is part of a database development project for Computational Fluid Dynamics and Aeroacoustics codes that endeavor to predict noise characteristics of high speed jets. A limited amount of far field noise spectra from the same jets are also presented.
Finally, a direct experimental validation was obtained for the Crocco-Busemann equation which is commonly used to predict temperature and density profiles from known velocity profiles. Data presented in this paper are available in ASCII format upon request.
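Obtaining static pressure from the measured temperature and density, as the survey above does, is a direct application of the ideal-gas law for air, p = ρRT. A minimal sketch with illustrative ambient values:

```python
# Static pressure from measured density and temperature via the ideal gas law.
R_AIR = 287.05  # J/(kg K), specific gas constant of dry air

def static_pressure(rho, temp_k):
    """p = rho * R * T for air treated as an ideal gas."""
    return rho * R_AIR * temp_k

# Ambient air: rho ~ 1.184 kg/m^3 at 298 K gives roughly one atmosphere
p = static_pressure(1.184, 298.0)
print(p)
```

With the quoted 1% density and 10 K temperature uncertainties, the propagated pressure uncertainty is a few percent.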
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
NASA Astrophysics Data System (ADS)
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research.
It has the potential to significantly increase the flexibility of hybrid rarefied/continuum flow analyses.
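The core operation described above, weighting each simulated molecule by the inverse of its linear distance to a grid node, can be sketched in a few lines. The particle positions and values below are hypothetical:

```python
import numpy as np

def idw_node_value(node, particles, values, eps=1e-12):
    """Inverse-distance-weighted average of particle values at a grid node.

    node: (d,) position; particles: (n, d) positions; values: (n,) property.
    eps guards against a particle sitting exactly on the node.
    """
    d = np.linalg.norm(particles - node, axis=1)
    w = 1.0 / (d + eps)                  # weight = inverse linear distance
    return np.sum(w * values) / np.sum(w)

# Two equidistant particles: the node sees the plain average of their values
node = np.array([0.0, 0.0])
parts = np.array([[1.0, 0.0], [-1.0, 0.0]])
vals = np.array([300.0, 400.0])
print(idw_node_value(node, parts, vals))   # -> 350.0
```

Nearer particles dominate the estimate, which is what makes the scheme grid-independent: nodes need no knowledge of how the molecules were generated.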
Diagnostic quality of time-averaged ECG-gated CT data
NASA Astrophysics Data System (ADS)
Klein, Almar; Oostveen, Luuk J.; Greuter, Marcel J. W.; Hoogeveen, Yvonne; Schultze Kool, Leo J.; Slump, Cornelis H.; Renema, W. Klaas Jan
2009-02-01
Purpose: ECG-gated CTA allows visualization of the aneurysm and stentgraft during the different phases of the cardiac cycle, although with a lower SNR per cardiac phase than without ECG gating using the same dose. In our institution, abdominal aortic aneurysm (AAA) is evaluated using non-ECG-gated CTA. Some common CT scanners cannot reconstruct a non-gated volume from ECG-gated acquired data. In order to obtain the same diagnostic image quality, we propose offline temporal averaging of the ECG-gated data. This process, though straightforward, is fundamentally different from taking a non-gated scan, and its result will certainly differ as well. The purpose of this study is to quantitatively investigate how good off-line averaging approximates a non-gated scan. Method: Non-gated and ECG-gated CT scans have been performed on a phantom (Catphan 500). Afterwards the phases of the ECG-gated CTA data were averaged to create a third dataset. The three sets are compared with respect to noise properties (NPS) and frequency response (MTF). To study motion artifacts identical scans were acquired on a programmable dynamic phantom. Results and Conclusions: The experiments show that the spatial frequency content is not affected by the averaging process. The minor differences observed for the noise properties and motion artifacts are in favor of the averaged data. Therefore the averaged ECG-gated phases can be used for diagnosis. This enables the use of ECG-gating for research on stentgrafts in AAA, without impairing clinical patient care.
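The off-line temporal averaging studied above trades N independent gated phases for one lower-noise volume; for uncorrelated noise the standard deviation should fall roughly as 1/sqrt(N). A minimal simulation with an assumed noise level and a static (motion-free) phantom:

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, shape = 10, (64, 64)
truth = np.zeros(shape)   # static phantom: no cardiac motion between phases
sigma = 20.0              # per-phase noise level in HU (assumed)

# Each gated phase is the phantom plus independent noise
phases = truth + rng.normal(0.0, sigma, (n_phases, *shape))
averaged = phases.mean(axis=0)

# Noise drops roughly as 1/sqrt(N) when the phases are independent
print(phases[0].std(), averaged.std())
```

With a moving phantom the average would additionally blur the motion, which is why the study compares noise properties, frequency response, and motion artifacts separately.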
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta
2004-01-01
Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about velocity, density, and temperature of the gas. The technique uses a slow scan, low noise 16-bit depth CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. The ability to extract turbulence information from the fringe image data proved to be a challenge since the fringe is broadened by not only turbulence, but also thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of flow conditions observed in this work due to aperture effects and turbulence for a scattering angle, χ_s, of 90 degrees, f/3.67 collection optics, mean flow velocity, u_k, of 300 m/s, and turbulent velocity fluctuations, σ_uk, of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates; taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task.
For measurement points lacking particles, velocity, density, and temperature information could reliably be recovered; however, the turbulence estimates contained significant uncertainty. Resulting flow parameter estimates are presented for surveys of Mach 0.6, 0.95, and 1.4 jet flows. Velocity, density, and temperature were determined with accuracies of 5 m/s, 1.5%, and 1%, respectively, in flows with no particles present, and with accuracies of 5 m/s, 1-4%, and 2% in flows with particles. Comparison with hotwire data for the Mach 0.6 condition demonstrated turbulence estimates with accuracies of about 5 m/s outside the jet core where Mie scattering from dust/smoke particulates aided in the estimation of turbulence. Turbulence estimates could not be recovered with any significant accuracy for measurement points where no particles were present.
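The nonlinear least-squares step above, fitting a spectral model to extract flow parameters, can be sketched generically. Assuming a simple Gaussian peak (a stand-in for the kinetic-theory lineshape) whose center plays the role of the Doppler/velocity shift and whose width plays the role of thermal broadening:

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(f, shift, width, amp):
    """Gaussian spectral peak: center ~ velocity shift, width ~ temperature."""
    return amp * np.exp(-0.5 * ((f - shift) / width) ** 2)

rng = np.random.default_rng(2)
f = np.linspace(-5, 5, 400)   # frequency axis (arbitrary units)
# Synthetic "measured" spectrum with assumed true parameters plus noise
data = peak(f, 0.8, 1.2, 1.0) + rng.normal(0, 0.01, f.size)

popt, _ = curve_fit(peak, f, data, p0=[0.0, 1.0, 0.5])
print(popt)   # recovered [shift, width, amp]
```

The decoupling difficulty noted in the abstract corresponds to two broadening mechanisms entering the width parameter jointly, so a single-width model like this one cannot separate them without extra information (here, the Mie-scattering component from seeded particles).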
Discrete Randomness in Discrete Time Quantum Walk: Study Via Stochastic Averaging
NASA Astrophysics Data System (ADS)
Ellinas, D.; Bracken, A. J.; Smyrnakis, I.
2012-10-01
The role of classical noise in quantum walks (QW) on integers is investigated in the form of a discrete dichotomic random variable affecting its reshuffling matrix, parametrized as an SU(2)/U(1) coset element. Analysis in terms of quantum statistical moments and generating functions, derived by the completely positive trace preserving (CPTP) map governing evolution, reveals a pronounced eventual transition in the walk's diffusion mode, from a quantum ballistic regime with rate O(t) to a classical diffusive regime with rate O(√t), when the condition (strength of noise parameter)^2 × (number of steps) = 1 is satisfied. The role of classical randomness is studied showing that the randomized QW, when treated on the stochastic average level by means of an appropriate CPTP averaging map, turns out to be equivalent to a novel quantized classical walk without randomness. This result emphasizes the dual role of quantization/randomization in the context of classical random walk.
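The ballistic-to-diffusive transition described above can be seen in a direct simulation. A minimal sketch of a coined walk on the line, using a Hadamard coin and, as an illustrative noise model (not the paper's dichotomic coset parametrization), a random coin phase each step, with the probability distribution averaged over runs:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

def walk(steps, noisy=False, rng=None):
    """1D coined quantum walk; noisy=True randomizes the coin phase each step."""
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)
    psi[steps] = np.array([1.0, 1.0j]) / np.sqrt(2)   # symmetric initial state
    for _ in range(steps):
        coin = H
        if noisy:
            phi = rng.uniform(0, 2 * np.pi)
            coin = H @ np.diag([np.exp(1j * phi), 1.0])
        psi = psi @ coin.T                 # apply coin to the spin index
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]           # spin-up component shifts right
        new[:-1, 1] = psi[1:, 1]           # spin-down component shifts left
        psi = new
    return (np.abs(psi) ** 2).sum(axis=1)  # position probability distribution

steps = 60
x = np.arange(-steps, steps + 1)
p_clean = walk(steps)
rng = np.random.default_rng(3)
p_noisy = np.mean([walk(steps, noisy=True, rng=rng) for _ in range(50)], axis=0)

def spread(p):
    m = np.sum(x * p)
    return np.sqrt(np.sum((x - m) ** 2 * p))

print(spread(p_clean), spread(p_noisy))   # ballistic O(t) vs diffusive O(sqrt(t))
```

The noiseless spread grows linearly with step count while the phase-randomized ensemble collapses toward the classical square-root law, the transition the abstract quantifies via the noise-strength condition.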
Empirical wind model for the middle and lower atmosphere. Part 1: Local time average
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Franke, S. J.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Gradient winds from CIRA-86 plus rocket soundings, incoherent scatter radar, MF radar, and meteor radar provide the data base and are supplemented by previous data driven model summaries. Low-order spherical harmonics and Fourier series are used to describe the major variations throughout the atmosphere including latitude, annual, semiannual, and longitude (stationary wave 1). The model represents a smoothed compromise between the data sources. Although agreement between various data sources is generally good, some systematic differences are noted, particularly near the mesopause. Root mean square differences between data and model are on the order of 15 m/s in the mesosphere and 10 m/s in the stratosphere for zonal wind, and 10 m/s and 4 m/s, respectively, for meridional wind.
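The low-order Fourier representation used by the model above (mean plus annual and semiannual harmonics in time) amounts to a linear least-squares fit of a harmonic basis. A minimal sketch on synthetic wind data with assumed amplitudes:

```python
import numpy as np

rng = np.random.default_rng(4)
doy = np.arange(365.0)
w = 2 * np.pi * doy / 365.0

# Synthetic daily zonal wind: mean + annual + semiannual harmonics + noise
# (amplitudes and phases are assumed, illustrative values)
u = 10.0 + 6.0 * np.cos(w - 0.5) + 2.0 * np.cos(2 * w - 1.0) + rng.normal(0, 1.0, doy.size)

# Least-squares fit of the same harmonic basis
X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w), np.cos(2 * w), np.sin(2 * w)])
c, *_ = np.linalg.lstsq(X, u, rcond=None)

annual_amp = np.hypot(c[1], c[2])       # amplitude of the annual harmonic
semi_amp = np.hypot(c[3], c[4])         # amplitude of the semiannual harmonic
print(c[0], annual_amp, semi_amp)
```

The full model adds spherical harmonics in latitude/longitude, but the fitting machinery is the same linear problem with more basis columns.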
Kodra, Evan A; Chatterjee, Snigdhansu; Ganguly, Auroop R
2010-01-01
Detection and attribution methodologies have been developed over the years to delineate anthropogenic from natural drivers of climate change and impacts. A majority of prior attribution studies, which have used climate model simulations and observations or reanalysis datasets, have found evidence for human-induced climate change. This paper tests the hypothesis that Granger causality can be extracted from the bivariate series of globally averaged land surface temperature (GT) observations and observed CO2 in the atmosphere using a reverse cumulative Granger causality test. This proposed extension of the classic Granger causality test is better suited to handle the multisource nature of the data and provides further statistical rigor. The results from this modified test show evidence for Granger causality from a proxy of total radiative forcing (RC), which in this case is a transformation of atmospheric CO2, to GT. Prior literature failed to extract these results via the standard Granger causality test. A forecasting test shows that a holdout set of GT can be better predicted with the addition of lagged RC as a predictor, lending further credibility to the Granger test results. However, since second-order-differenced RC is neither normally distributed nor variance stationary, caution should be exercised in the interpretation of our results.
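A standard Granger test of the kind extended above compares a restricted autoregression (lags of the target only) against an unrestricted one (adding lags of the candidate cause) via an F-statistic. A minimal sketch on synthetic data, where a toy series x drives y with a one-step delay, standing in for the forcing-to-temperature direction:

```python
import numpy as np

rng = np.random.default_rng(5)
n, lag = 400, 2

# Synthetic pair: x drives y with a one-step delay (assumed toy dynamics)
x = rng.normal(0, 1, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Y = y[lag:]
ylags = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
xlags = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])

def rss(X, z):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return r @ r

Xr = np.column_stack([np.ones(n - lag), ylags])   # restricted: y lags only
Xu = np.column_stack([Xr, xlags])                 # unrestricted: plus x lags
rss_r, rss_u = rss(Xr, Y), rss(Xu, Y)
F = ((rss_r - rss_u) / lag) / (rss_u / (n - lag - Xu.shape[1]))
print(F)   # a large F rejects "x does not Granger-cause y"
```

The paper's reverse cumulative variant repeats such tests over expanding samples; the caution about non-stationarity applies because the F-test's distributional assumptions fail for non-stationary inputs.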
How complex is the time-averaged geomagnetic field over the past 5 Myr?
NASA Astrophysics Data System (ADS)
Carlut, Julie; Courtillot, Vincent
1998-08-01
A basic tenet of palaeomagnetism is that the Earth's magnetic field behaves on average like that of a central axial dipole (g_1^0). Nevertheless, the question of possible persistent second-order features is still open. Recently Johnson & Constable (1995, 1996) performed a regularized non-linear inversion and found evidence for persistent non-zonal features. Formal uncertainties would indicate that there are significant (non-zero) terms at least up to degree and order 4. Using a recent compilation of two different data sets from lavas (0 to 5 Ma) and the Johnson & Constable codes, we test the robustness of this result. The data set has been divided into three subsets: the Brunhes polarity data (B), all normal polarity data (N) and all reverse data (R). In each subset of data, a prominent g_2^0, of the order of 5 per cent of g_1^0, is clearly present, as previously established by several authors. In some subsets, smaller terms appear: g_2^2 and g_1^1 in the Brunhes data, h_3^1 and h_2^1 in N, and h_2^1, g_3^0 and g_3^3 in R. A threshold under which terms resulting from the inversion cannot yet be considered as robust appears to be of the order of 300 nT. Indeed, tests show that many terms, which are different for each epoch (B, N or R), may be artefacts due to aliasing because of poor site distribution, or due to the underestimation of a priori errors in the data; these could result from undetected tectonic rotations, non-horizontal palaeoslopes, or viscous overprints. Because of these limitations in resolution, it may not yet be possible to identify robustly terms other than the axial dipole and quadrupole. The persistence of high-latitude flux concentrations, hemispheric asymmetry or normal versus reversed field asymmetry cannot yet be considered as demonstrated.
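The effect of a persistent axial quadrupole g_2^0 of a few per cent of g_1^0 shows up most directly in the inclination of the time-averaged field. A minimal sketch, with illustrative (assumed) coefficient values and sign conventions for a normal-polarity field:

```python
import numpy as np

def inclination(lat_deg, g10=-30000.0, g20=-1500.0):
    """Surface inclination (deg) for an axial dipole g10 plus axial
    quadrupole g20 (Schmidt-normalized, nT; illustrative values).
    Normal-polarity convention: g10 < 0, Z positive downward."""
    theta = np.radians(90.0 - lat_deg)          # colatitude
    ct, st = np.cos(theta), np.sin(theta)
    p2 = 0.5 * (3.0 * ct**2 - 1.0)
    z = -(2.0 * g10 * ct + 3.0 * g20 * p2)      # downward component at r = a
    h = -(g10 * st + 3.0 * g20 * ct * st)       # northward (horizontal) component
    return np.degrees(np.arctan2(z, h))

# Pure axial dipole recovers the textbook relation tan(I) = 2 tan(latitude)
print(inclination(45.0, g20=0.0))   # ~63.4 deg
print(inclination(45.0))            # shifted by the quadrupole term
```

The degree-per-cent-level inclination anomaly this produces is precisely the kind of second-order signal the lava compilations are probing.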
ERIC Educational Resources Information Center
Deal, Willard M., Jr.; And Others
This study was conducted to ascertain the effects of employment on the academic performance of community college students. The grade point averages of 830 full-time students at Wilkes Community College (North Wilkesboro, N.C.) were analyzed to determine: (1) if working students' GPA's differed significantly from those of the total student…
NASA Astrophysics Data System (ADS)
Uneyama, Takashi; Miyaguchi, Tomoshige; Akimoto, Takuma
2015-09-01
The mean-square displacement (MSD) is widely utilized to study the dynamical properties of stochastic processes. The time-averaged MSD (TAMSD) provides some information on the dynamics which cannot be extracted from the ensemble-averaged MSD. In particular, the relative standard deviation (RSD) of the TAMSD can be utilized to study the long-time relaxation behavior. In this work, we consider a class of Langevin equations which are multiplicatively coupled to time-dependent and fluctuating diffusivities. Various interesting dynamics models such as entangled polymers and supercooled liquids can be interpreted as the Langevin equations with time-dependent and fluctuating diffusivities. We derive a general formula for the RSD of the TAMSD for the Langevin equation with the time-dependent and fluctuating diffusivity. We show that the RSD can be expressed in terms of the correlation function of the diffusivity. The RSD exhibits the crossover at the long time region. The crossover time is related to a weighted average relaxation time for the diffusivity. Thus the crossover time gives some information on the relaxation time of fluctuating diffusivity which cannot be extracted from the ensemble-averaged MSD. We discuss the universality and possible applications of the formula via some simple examples.
PMID:26465459
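The TAMSD and its relative standard deviation discussed above are straightforward to compute from trajectories. A minimal sketch for ordinary 1D Brownian motion (constant diffusivity, so the RSD stays small; fluctuating-diffusivity models would inflate it):

```python
import numpy as np

rng = np.random.default_rng(6)
n_traj, n_steps, dt, D = 200, 1000, 1.0, 0.5

# 1D Brownian trajectories with constant diffusivity D
steps = rng.normal(0, np.sqrt(2 * D * dt), (n_traj, n_steps))
x = np.cumsum(steps, axis=1)

def tamsd(traj, lag):
    """Time-averaged MSD of one trajectory at a given lag."""
    d = traj[lag:] - traj[:-lag]
    return np.mean(d * d)

lag = 10
vals = np.array([tamsd(t, lag) for t in x])   # one TAMSD value per trajectory
rsd = vals.std() / vals.mean()
print(vals.mean(), rsd)   # mean ~ 2*D*lag*dt; RSD small for ordinary diffusion
```

In the paper's setting, replacing the constant D with a time-dependent, fluctuating diffusivity makes the RSD a probe of the diffusivity correlation function, with the long-time crossover encoding its relaxation time.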
Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround
ERIC Educational Resources Information Center
Peurach, Donald J.; Neumerski, Christine M.
2015-01-01
The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…
Importing Leaders for School Turnarounds: Lessons and Opportunities
ERIC Educational Resources Information Center
Kowal, Julie; Hassel, Emily Ayscue
2011-01-01
One of the biggest challenges in education today is identifying talented candidates to successfully lead turnarounds of persistently low-achieving schools. Evidence suggests that the traditional principal pool is already stretched to capacity and cannot supply enough leaders to fix failing schools. But potentially thousands of leaders capable of…
California Turnaround Schools: An Analysis of School Improvement Grant Effectiveness
ERIC Educational Resources Information Center
Graham, Khalil N.
2013-01-01
The purpose of this study was to evaluate the effectiveness of School Improvement Grants (SIGs) in the state of California (CA) in increasing student achievement using the turnaround implementation model. The American Recovery and Reinvestment Act of 2009 (ARRA) included educational priorities focused on fixing America's lowest achieving schools.…
Negotiation Areas for "Transformation" and "Turnaround" Intervention Models
ERIC Educational Resources Information Center
Mass Insight Education (NJ1), 2011
2011-01-01
To receive School Improvement Grant (SIG) funding, districts must submit an application to the state that outlines their strategic plan to implement one of four intervention models in their persistently lowest-achieving schools. The four intervention models include: (1) School Closure; (2) Restart; (3) Turnaround; and (4) Transformation. The…
Turnaround and Closure Rates in the Charter and District Sectors
ERIC Educational Resources Information Center
Stuit, David
2012-01-01
This article examines the turnaround and closure rates of charter and district schools in 10 states to determine if one school sector has done comparatively better than the other at addressing chronically low-performing schools. Low-performing charter and district schools were identified in 2003-04 based on their reading and math proficiency rates…
Can Principals' Emotional Intelligence Matter to School Turnarounds?
ERIC Educational Resources Information Center
Cai, Qijie
2011-01-01
Through the literature review, the study aims to explore the relationship between the emotional intelligence (EI) of principals and the turnarounds of low-performing schools and generate some hypotheses regarding this issue. Rigorous, empirical studies published in the last 15 years are selected and reviewed around three relevant topics: the…
A miniature turn-around for distributed temperature sensing
NASA Astrophysics Data System (ADS)
Sun, Xiaoguang; Burgess, David T.; Bedard, Kyle; Li, Jie; Hines, Mike
2015-05-01
In many fiber optic distributed temperature sensing (DTS) systems, a dual-ended configuration can correct the temperature measurement error associated with wavelength dependent loss (WDL) of the optical fiber and can provide a more accurate temperature measurement in comparison with a single-ended fiber system. In this configuration, two pieces of fiber are laid parallel to each other and connected at the distal end by a turn-around device, creating a U-shaped optical path that provides accessibility to both legs from the proximal end of the system. In many applications, tightly confined spaces constrain the fiber bend diameter and thus the size of the turn-around device. In this paper we report a miniature turn-around built with a short section of a graded index (GI) fiber. The device measures less than 300 µm in diameter and less than 2 mm in length. The insertion loss of the miniature turn-around is measured and compared with theoretical simulations.
129. Back side technical facilities passageways between turnaround trans. & ...
129. Back side technical facilities passageways between turnaround trans. & building no. 104, "plans, sections & elevations" - architectural, AS-BLT AW 36-25-13, sheet 9 of 40, dated 23 November, 1960. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
Policy Perspective: School Turnaround in England. Utilizing the Private Sector
ERIC Educational Resources Information Center
Corbett, Julie
2014-01-01
This paper, written by strategic partner of the Center on School Turnaround (CST), Julie Corbett, provides research and examples on England's approach to turning around its lowest performing schools. The English education system utilizes private vendors to support chronically low-performing schools and districts. The introduction is followed by…
The Rural Turnaround in Ohio: Some Evidence Related to Implications.
ERIC Educational Resources Information Center
Thomas, Donald W.; Bachtel, Douglas C.
Limited to residents of small towns, villages and the rural open country, the study assessed the implications of the rural turnaround in the southern Ohio counties of Athens, Gallia, Jackson, Meigs, and Vinton. All five counties experienced outmigration in the 1950s, and all but Athens County lost population through outmigration in the 1960s. In…
ERIC Educational Resources Information Center
Murphy, Joseph; Meyers, Coby V.
2009-01-01
In this article, we provide a grounded narrative of capacity building in the turnaround equation by exploring the turnaround literature outside of education and applying it to troubled schools. Our analysis is based upon reviews of: (1) 14 comprehensive, historical volumes that examine the turnaround phenomenon; (2) 16 book-length analyses of…
Turnaround Schools as a U-Turn for Student Achievement: The Rotary of School Reform Efforts
ERIC Educational Resources Information Center
Mette, Ian M.
2012-01-01
This case study presents data to support the notion that turnaround school policy can improve the efficiency of how traditionally low-performing schools function. The schools that were successful in implementing the UVA Turnaround Program training developed a clear understanding of the expectations for participating in the UVA Turnaround Program…
Apollo/Saturn 5 space vehicle countdown. Volume 2: Turnaround from scrub
NASA Technical Reports Server (NTRS)
1972-01-01
The procedures required to prepare a space vehicle for subsequent launch attempt after cancelling lift-off activities are presented. The cancellation must occur after the start of cryogenic loading, but prior to initiation of ignition sequences. The sequence of operations necessary to return the space vehicle to a predetermined configuration at which time the launch count can be resumed or rescheduled for following launch opportunities is developed. The considerations and constraints that are the basis for the scrub/turnaround times are described.
Verrier, Nicolas; Atlan, Michael
2013-03-01
We report a demonstration of the measurement of the ratio between an optical modulation sideband component and the nonshifted light component by time-averaged heterodyne holography in off-axis and frequency-shifting configuration, through coherent frequency-division multiplexing with a dual optical local oscillator. Experimental results of sinusoidal vibration sensing are reported. This technique enables absolute measurements of subnanometric out-of-plane vibration amplitudes. PMID:23455283
Leclercq, Mathieu; Karray, Mayssa; Isnard, Vincent; Gautier, François; Picart, Pascal
2013-01-01
This paper proposes a first attempt to visualize and analyze the vibrations induced by a bone-conduction device and propagating at the surface of the skin of a human face. The method is based on a new approach in a so-called quasi-time-averaging regime, resulting in the retrieval of the vibration amplitude and phase from a sequence of digital Fresnel holograms recorded with a high image rate. The design of the algorithm depends on the ratio between the exposure time and the vibration period. The results show the propagation of vibrations at the skin surface, and quantitative analysis is achieved by the proposed approach. PMID:23292386
NASA Astrophysics Data System (ADS)
Hilderman, Trevor; Wilson, David J.
2007-03-01
Plume meandering and averaging time effects were measured directly using a high spatial resolution, high frequency, linescan laser-induced fluorescence (LIF) technique for measuring scalar concentrations in a plume dispersing in a water channel. Post-processing of the collected data removed time dependent background dye levels and corrected for attenuation across the laser beam to produce accurate measurements over long sample times in both a rough surface boundary-layer shear flow and shear free grid-generated turbulent flow. The data were used to verify the applicability of a meandering plume model for predicting the properties of mean and fluctuating concentrations. The centroid position of the crosswind concentration profile was found to have a Gaussian probability density function and the instantaneous plume spread about the centroid fluctuated log-normally. A modified travel-time power law model for averaging time adjustment was developed and compared to the widely used, but much less accurate, 0.2 power-law model.
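The averaging-time adjustment the abstract contrasts with the classic 0.2 power law can be sketched as follows. This is an illustrative sketch of the conventional power-law correction only, with invented numbers; the paper's modified travel-time model is not reproduced here:

```python
def adjust_concentration(c_ref, t_ref, t_avg, p=0.2):
    """Rescale a peak concentration measured over averaging time t_ref
    to a different averaging time t_avg using the classic power law
    C(t_avg) = C(t_ref) * (t_ref / t_avg)**p. The exponent p = 0.2 is
    the widely used default the paper argues is inaccurate."""
    return c_ref * (t_ref / t_avg) ** p

# Hypothetical example: a 3-minute peak concentration of 100 units
# rescaled to a 60-minute average (longer averaging smooths the peak)
c60 = adjust_concentration(c_ref=100.0, t_ref=3.0, t_avg=60.0)
```

A travel-time-modified model would replace the fixed exponent with one that depends on downwind travel time, which is the refinement the study develops.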
Epstein, R.
1997-09-01
In inertial confinement fusion (ICF) experiments, irradiation uniformity is improved by passing laser beams through distributed phase plates (DPPs), which produce focused intensity profiles with well-controlled, reproducible envelopes modulated by fine random speckle. [C. B. Burckhardt, Appl. Opt. 9, 695 (1970); Y. Kato and K. Mima, Appl. Phys. B 29, 186 (1982); Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984); Laboratory for Laser Energetics LLE Review 33, NTIS Document No. DOE/DP/40200-65, 1987 (unpublished), p. 1; Laboratory for Laser Energetics LLE Review 63, NTIS Document No. DOE/SF/19460-91, 1995 (unpublished), p. 1.] A uniformly ablating plasma atmosphere acts to reduce the contribution of the speckle to the time-averaged irradiation nonuniformity by causing the intensity distribution to move relative to the absorption layer of the plasma. This occurs most directly as the absorption layer in the plasma moves with the ablation-driven flow, but it is shown that the effect of the accumulating ablated plasma on the phase of the laser light also makes a quantitatively significant contribution. Analytical results are obtained using the paraxial approximation applied to the beam propagation, and a simple statistical model is assumed for the properties of DPPs. The reduction in the time-averaged spatial spectrum of the speckle due to these effects is shown to be quantitatively significant within time intervals characteristic of atmospheric hydrodynamics under typical ICF irradiation intensities. © 1997 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Ali, Mubarak; Ramirez, Patricio; Nasir, Saima; Nguyen, Quoc-Hung; Ensinger, Wolfgang; Mafe, Salvador
2014-01-01
Rectification in nanopores is usually achieved by a fixed asymmetry in the pore geometry and charge distribution. We show here that nanoparticle blocking of a cylindrical pore induces rectifying properties that can support significant net currents with zero time-average potentials. To describe this effect experimentally, the steady-state current-voltage curves of a single nanopore are obtained for different charge states and relative sizes of the pore and the charged nanoparticles, which are present only on one side. The rectification phenomena observed can find applications in the area of nanofluidics and involve physical concepts that are also characteristic of the blocking of protein ion channels by ionic drugs.
NASA Astrophysics Data System (ADS)
Shonkwiler, K. B.; Ham, J. M.; Williams, C. M.
2013-12-01
Ammonia (NH3) that volatilizes from confined animal feeding operations (CAFOs) can form aerosols that travel long distances where such aerosols can deposit in sensitive regions, potentially causing harm to local ecosystems. However, quantifying the emissions of ammonia from CAFOs through direct measurement is very difficult and costly to perform. A system was therefore developed at Colorado State University for conditionally sampling NH3 concentrations based on weather parameters measured using inexpensive equipment. These systems use passive diffusive cartridges (Radiello, Sigma-Aldrich, St. Louis, MO, USA) that provide time-averaged concentrations representative of a two-week deployment period. The samplers are exposed by a robotic mechanism so they are only deployed when wind is from the direction of the CAFO at 1.4 m/s or greater. These concentration data, along with other weather variables measured during each sampler deployment period, can then be used in a simple inverse model (FIDES, UMR Environnement et Grandes Cultures, Thiverval-Grignon, France) to estimate emissions. There are not yet any direct comparisons of the modeled emissions derived from time-averaged concentration data to modeled emissions from more sophisticated backward Lagrangian stochastic (bLs) techniques that utilize instantaneous measurements of NH3 concentration. In the summer and autumn of 2013, a suite of robotic passive sampler systems were deployed at a 25,000-head cattle feedlot at the same time as an open-path infrared (IR) diode laser (GasFinder2, Boreal Laser Inc., Edmonton, Alberta, Canada) which continuously measured ammonia concentrations instantaneously over a 225-m path. This particular laser is utilized in agricultural settings, and in combination with a bLs model (WindTrax, Thunder Beach Scientific, Inc., Halifax, Nova Scotia, Canada), has become a common method for estimating NH3 emissions from a variety of agricultural and industrial operations. 
This study will first compare the ammonia concentrations measured with the Radiello system to those measured with the long-path IR laser. Second, NH3 emissions estimated using the simple inverse model (FIDES) and the time-averaged data will be compared to emissions derived from the bLS model (WindTrax) using the laser-based NH3 data. Results could lead to a more cost-efficient and simpler technique for monitoring ammonia fluxes from CAFOs and other strong areal sources.
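The conditional-sampling rule described above (expose the passive samplers only when wind blows from the direction of the CAFO at 1.4 m/s or greater) can be sketched as a simple predicate. The 45° direction tolerance below is an assumed parameter for illustration; the abstract states only the direction condition and the 1.4 m/s threshold:

```python
def should_deploy(wind_dir_deg, wind_speed_ms, source_dir_deg,
                  tol_deg=45.0, min_speed_ms=1.4):
    """Return True when the passive sampler should be exposed:
    wind at or above the speed threshold AND coming from within
    tol_deg of the bearing toward the source (CAFO)."""
    # angular difference wrapped into [0, 180] degrees
    diff = abs((wind_dir_deg - source_dir_deg + 180.0) % 360.0 - 180.0)
    return wind_speed_ms >= min_speed_ms and diff <= tol_deg
```

A robotic mechanism would poll this decision each logging interval and retract the Radiello cartridges whenever it returns False, so the two-week time-averaged concentration reflects only source-impacted air.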
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
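The non-convexity argument above can be illustrated numerically: on a non-convex power-versus-speed envelope, a time mixture of a slow walk and a fast run can cost less average power than steady locomotion at the same average speed. The cost curve below is invented for illustration and is not the paper's measured metabolic data:

```python
def power(v):
    """Toy metabolic rate at speed v: the lower envelope of a 'walk'
    branch and a 'run' branch, deliberately non-convex (illustrative
    numbers only)."""
    walk = 2.0 + 5.0 * (v - 1.0) ** 2
    run = 4.0 + 1.0 * (v - 3.0) ** 2
    return min(walk, run)

def mixture_power(v_avg, v1, v2):
    """Average power when spending a fraction alpha of the time at v1
    and (1 - alpha) at v2, chosen so the average speed equals v_avg."""
    alpha = (v2 - v_avg) / (v2 - v1)
    return alpha * power(v1) + (1 - alpha) * power(v2)

steady = power(2.0)                    # steady locomotion at 2 m/s
mixed = mixture_power(2.0, 1.0, 3.0)   # half the time walking, half running
```

Because the envelope dips below its chord between the two branches, the mixture lies on the convex hull of the cost curve and beats the steady gait, which is exactly the energetic mechanism the authors invoke.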
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Ishii, Toshiki; Hoshizawa, Taku; Takashima, Yuzuru
2015-09-01
Mechanical instabilities during recording, and the resulting degradation of signal quality, are among the obstacles preventing high-density, high-data-transfer-rate holographic data storage systems (HDSS) from becoming stable systems. We analytically formulated the effects of mechanical instabilities of a Galvano mirror and spindle motor on the HDSS by incorporating the concept of time-averaged holography. Mechanical parameters such as the amplitude and frequency of mechanical oscillation are related to optical parameters such as the amplitude and phase of the reference and signal beams. In particular, the analytical formulation led to a new method of optical and post compensation for mechanical instability during hologram recording. The optical post compensation method enables a robust implementation of HDSS against mechanical instabilities.
Time-averaged distributions of solute and solvent motions: exploring proton wires of GFP and PfM2DH.
Velez-Vega, Camilo; McKay, Daniel J J; Aravamuthan, Vibhas; Pearlstein, Robert; Duca, José S
2014-12-22
Proton translocation pathways of selected variants of the green fluorescent protein (GFP) and Pseudomonas fluorescens mannitol 2-dehydrogenase (PfM2DH) were investigated via an explicit solvent molecular dynamics-based analysis protocol that allows for direct quantitative relationship between a crystal structure and its time-averaged solute-solvent structure obtained from simulation. Our study of GFP is in good agreement with previous research suggesting that the proton released from the chromophore upon photoexcitation can diffuse through an extended internal hydrogen bonding network that allows for the proton to exit to bulk or be recaptured by the anionic chromophore. Conversely for PfM2DH, we identified the most probable ionization states of key residues along the proton escape channel from the catalytic site to bulk solvent, wherein the solute and high-density solvent crystal structures of binary and ternary complexes were properly reproduced. Furthermore, we proposed a plausible mechanism for this proton translocation process that is consistent with the state-dependent structural shifts observed in our analysis. The time-averaged structures generated from our analyses facilitate validation of MD simulation results and provide a comprehensive profile of the dynamic all-occupancy solvation network within and around a flexible solute, from which detailed hydrogen-bonding networks can be inferred. In this way, potential drawbacks arising from the elucidation of these networks by examination of static crystal structures or via alternate rigid-protein solvation analysis procedures can be overcome. Complementary studies aimed at the effective use of our methodology for alternate implementations (e.g., ligand design) are currently underway. PMID:25405925
Meeting the challenge of a group practice turnaround.
Porn, L M
2001-03-01
Many healthcare organizations that acquired group practices to enhance their market share have found that the practices have not met their financial goals. Turning around a financially troubled, hospital-owned group practice is challenging but not impossible for healthcare organizations that take certain basic actions. Direction, data, desire, dedication, and drive must be present to effect the financial turnaround of a group practice. The healthcare organization needs to evaluate the practice's strategy and operations and identify the issues that are hindering the practice's ability to optimize revenues. Efforts to achieve profitable operations have to be ongoing. PMID:11258274
SSME 3-D Turnaround Duct flow analysis - CFD predictions
NASA Astrophysics Data System (ADS)
Brankovic, Andreja; Stowers, Steven T.; McConnaughey, Paul
1988-07-01
CFD analysis is presently employed to obtain an improved flowfield for an individual flowpath in the case of the Space Shuttle Main Engine's High Pressure Fuel Turbopump Turn-Around Duct (TAD), which conducts the flow exiting from the gas turbines into the fuel bowl. It is demonstrated that the application of CFD to TAD flow analysis, giving attention to the duct's configuration and to the number, shape, and alignment of the diffuser struts, can enhance understanding of flow physics and result in improved duct design and performance.
Pizzuto, James; Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg; Williamson, Elyse; Karwan, Diana L.; O'Neal, Michael; Marquard, Julia; Aalto, Rolf; Newbold, Denis
2014-01-01
Watershed Best Management Practices (BMPs) are often designed to reduce loading from particle-borne contaminants, but the temporal lag between BMP implementation and improvement in receiving water quality is difficult to assess because particles are only moved downstream episodically, resting for long periods in storage between transport events. A theory is developed that describes the downstream movement of suspended sediment particles accounting for the time particles spend in storage given sediment budget data (by grain size fraction) and information on particle transit times through storage reservoirs. The theory is used to define a suspended sediment transport length scale that describes how far particles are carried during transport events, and to estimate a downstream particle velocity that includes time spent in storage. At 5 upland watersheds of the mid-Atlantic region, transport length scales for silt-clay range from 4 to 60 km, while those for sand range from 0.4 to 113 km. Mean sediment velocities for silt-clay range from 0.0072 km/yr to 0.12 km/yr, while those for sand range from 0.0008 km/yr to 0.20 km/yr, 4–6 orders of magnitude slower than the velocity of water in the channel. These results suggest lag times of 100–1000 years between BMP implementation and effectiveness in receiving waters such as the Chesapeake Bay (where BMPs are located upstream of the characteristic transport length scale). Many particles likely travel much faster than these average values, so further research is needed to determine the complete distribution of suspended sediment velocities in real watersheds.
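Given the storage-inclusive particle velocities reported above, a first-order lag between upstream BMP implementation and downstream response is simply distance divided by velocity. The 12 km distance below is a hypothetical site; the velocity bounds are the paper's silt-clay range:

```python
def lag_time_years(distance_km, velocity_km_per_yr):
    """First-order estimate of the lag between BMP implementation and
    a measurable response at a receiving water distance_km downstream,
    using a storage-inclusive mean particle velocity."""
    return distance_km / velocity_km_per_yr

# Hypothetical site 12 km upstream of the receiving water, bracketed by
# the reported silt-clay velocity range of 0.0072-0.12 km/yr
slow = lag_time_years(12.0, 0.0072)
fast = lag_time_years(12.0, 0.12)
```

The two bounds differ by more than an order of magnitude, which is why the authors stress that the full velocity distribution, not just the mean, is needed to judge BMP effectiveness timescales.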
NASA Technical Reports Server (NTRS)
Huff, Edward M.; Mosher, Marianne; Barszcz, Eric
2002-01-01
Recent research using NASA Ames AH-1 and OH-58C helicopters, and NASA Glenn test rigs, has shown that in-flight vibration data are typically non-stationary [1-4]. The nature and extent of this non-stationarity is most likely produced by several factors operating simultaneously. The aerodynamic flight environment and pilot commands provide continuously changing inputs, with a complex dynamic response that includes automatic feedback control from the engine regulator. It would appear that the combined effects operate primarily through an induced torque profile, which causes concomitant stress modulation at the individual internal gear meshes in the transmission. This notion is supported by several analyses, which show that upwards of 93% of the vibration signal's variance can be explained by knowledge of torque alone. That this relationship is stronger in an AH-1 than an OH-58, where measured non-stationarity is greater, suggests that the overall mass of the vehicle is an important consideration. In the lighter aircraft, the unsteady aerodynamic influences transmit relatively greater unsteady dynamic forces on the mechanical components, quite possibly contributing to its greater non-stationarity. In a recent paper using OH-58C pinion data [5], the authors have shown that in computing a time synchronous average (TSA) for various single-value metric computations, an effective trade-off can be obtained between sample size and measured stationarity by using data from only a single mesh cycle. A mesh cycle, which is defined as the number of rotations required for the gear teeth to return to their original mating position, has the property of representing all of the discrete phase angles of the opposing gears exactly once in the average. Measured stationarity is probably maximized because a single mesh cycle of the pinion gear occurs over a very short span of time, during which time-dependent non-stationary effects are kept to a minimum.
Clearly, the advantage of local stationarity diminishes as the temporal duration of the cycle increases. This is most evident for a planetary mesh cycle, which can take several minutes to complete.
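The mesh-cycle length described above (the number of rotations needed for the gear teeth to return to their original mating position) and a basic time-synchronous average can be sketched as follows. The tooth counts are hypothetical, and a real TSA pipeline would first resample the vibration signal to constant shaft angle using a tachometer signal:

```python
from math import gcd

def mesh_cycle_rotations(n_teeth_a, n_teeth_b):
    """Rotations of gear A before gears A and B return to their original
    tooth-mating position; every discrete tooth pairing occurs exactly
    once in this span."""
    return n_teeth_b // gcd(n_teeth_a, n_teeth_b)

def time_synchronous_average(signal, samples_per_rev, n_revs):
    """Average n_revs consecutive once-per-revolution segments of an
    angle-synchronous signal, attenuating components not locked to the
    shaft rotation."""
    segs = [signal[i * samples_per_rev:(i + 1) * samples_per_rev]
            for i in range(n_revs)]
    return [sum(s[k] for s in segs) / n_revs
            for k in range(samples_per_rev)]
```

With coprime tooth counts (the usual design choice), the mesh cycle equals the mating gear's full tooth count, which is why a pinion mesh cycle stays short while a planetary mesh cycle can take minutes.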
BATSE Observations of Gamma-Ray Burst Spectra. Part 3; Low-Energy Behavior of Time-Averaged Spectra
NASA Technical Reports Server (NTRS)
Preece, R. D.; Briggs, M. S.; Pendleton, G. N.; Paciesas, W. S.; Matteson, J. L.; Band, D. L.; Skelton, R. T.; Meegan, C. A.
1996-01-01
We analyze time-averaged spectra from 86 bright gamma-ray bursts from the first 5 years of the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma Ray Observatory to determine whether the lowest energy data are consistent with a standard spectral form fit to the data at all energies. The BATSE Spectroscopy Detectors have the capability to observe photons as low as 5 keV. Using the gamma-ray burst locations obtained with the BATSE Large Area Detectors, the Spectroscopy Detectors' low-energy response can be modeled accurately. This, together with a postlaunch calibration of the lowest energy Spectroscopy Detector discriminator channel, which can lie in the range 5-20 keV, allows spectral deconvolution over a broad energy range, approx. 5 keV to 2 MeV. The additional coverage allows us to search for evidence of excess emission, or for a deficit, below 20 keV. While no burst has a significant (greater than or equal to 3 sigma) deficit relative to a standard spectral model, we find that 12 bursts have excess low-energy emission, ranging between 1.2 and 5.8 times the model flux, that exceeds 5 sigma in significance. This is evidence for an additional low-energy spectral component in at least some bursts, or for deviations from the power-law spectral form typically used to model gamma-ray bursts at energies below 100 keV.
Wolthaus, J. W. H.; Sonke, J.-J.; Herk, M. van; Damen, E. M. F.
2008-09-15
Purpose: lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. Methods and Materials: 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Results: Accuracy of the used deformable image registration method was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods <0.5 mm for all directions) for the tumor region. 
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of 'shape differences' was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. Conclusions: We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
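The midposition construction described above (deform every 4D CT phase to the local time-mean position, then average the deformed frames) can be sketched in one dimension. This is a simplified nearest-neighbour illustration with a uniform displacement field per phase, not the authors' phase-based registration, which interpolates in 3-D:

```python
import numpy as np

def midposition_scan(frames, dvfs):
    """frames: (n_phases, n_voxels) gray values; dvfs: (n_phases, n_voxels)
    displacement of each voxel in each phase relative to a reference phase.
    Warp every phase to the local time-mean position, then median-average
    to suppress residual artifacts and noise."""
    mean_dvf = dvfs.mean(axis=0)      # local mean position over the cycle
    to_mid = dvfs - mean_dvf          # residual motion to cancel per phase
    deformed = np.empty_like(frames, dtype=float)
    n = frames.shape[1]
    for p in range(frames.shape[0]):
        # 1-D nearest-neighbour warp for illustration only
        idx = np.clip(np.round(np.arange(n) + to_mid[p]).astype(int), 0, n - 1)
        deformed[p] = frames[p][idx]
    return np.median(deformed, axis=0)

# Toy example: a bright voxel oscillating around index 4 over three phases
frames = np.zeros((3, 9))
for p, d in enumerate([-1, 0, 1]):
    frames[p, 4 + d] = 1.0
dvfs = np.array([[-1.0] * 9, [0.0] * 9, [1.0] * 9])
mid = midposition_scan(frames, dvfs)   # feature aligned at index 4
```

Median averaging (rather than the arithmetic mean) is one of the two options the authors evaluate; it keeps a residual misregistered phase from blurring the result.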
NASA Astrophysics Data System (ADS)
Dunn, M. G.; Kim, J.; Civinskas, K. C.; Boyle, R. J.
1992-06-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row and the first-stage blade row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the component. Stanton-number distributions are also reported for the second-stage vane at 50 percent span. A shock tube is used as a short-duration source of heated and pressurized air to which the turbine is subjected. Platinum thin-film gages are used to obtain the heat-flux measurements and miniature silicone-diaphragm pressure transducers are used to obtain the surface pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a quasi-3D Navier-Stokes solution and a version of STAN5. This same N-S technique was also used to obtain predictions for the first blade and the second vane.
Lakämper, Stefan; Lamprecht, Andreas; Schaap, Iwan A T; Dual, Jurg
2015-01-01
Ultrasonic standing waves are increasingly applied in the manipulation and sorting of micrometer-sized particles in microfluidic cells. To optimize the performance of such devices, it is essential to know the exact forces that the particles experience in the acoustic wave. Although much progress has been made via analytical and numerical modeling, the reliability of these methods relies strongly on the assumptions used, e.g. the boundary conditions. Here, we have combined an acoustic flow cell with an optical laser trap to directly measure the force on a single spherical particle in two dimensions. While performing ultrasonic frequency scans, we measured the time-averaged forces on single particles that were moved with the laser trap through the microfluidic cell. The cell including piezoelectric transducers was modeled with finite element methods. We found that the experimentally obtained forces and the derived pressure fields confirm the predictions from theory and modeling. This novel approach can now be readily expanded to other particle, chamber, and fluid regimes and opens up the possibility of studying the effects of the presence of boundaries, acoustic streaming, and non-linear fluids. PMID:25370872
NASA Astrophysics Data System (ADS)
Ma, Z. J.; Lai, P. T.; Liu, Z. H.; Fleischer, S.; Cheng, Y. C.
1990-12-01
The threshold voltage (VT) degradation of metal-oxide-semiconductor field-effect transistors (MOSFETs) with thermally nitrided oxide or pure oxide as the gate dielectric was determined under Fowler-Nordheim (FN) stressing. A typical VT turnaround behavior was observed for both kinds of devices. The VT for nitrided oxide MOSFETs shifts more negatively than that for pure oxide MOSFETs during the initial period of FN stressing, whereas the opposite is true for the positive shift after the critical time at the turnaround point. The discovery that the shift of the substrate current peak exhibits similar turnaround behavior reinforces the above results. Meanwhile, the field-effect electron mobility and the maximum transconductance in the channel for nitrided oxide MOSFETs are only slightly degraded by stressing compared to those for pure oxide MOSFETs. The VT turnaround behavior can be explained as follows: net trapped charges in the oxide are initially positive (due to hole traps in the oxide) and result in the negative shift of VT. With increasing injection time, trapped electrons in the oxide as well as acceptor-type interface states increase, resulting in the positive shift in VT. It is revealed that VT degradation in MOSFETs is dominated by the generation of acceptor-type interface states rather than electron trapping in the oxide after the critical time.
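The proposed mechanism (fast, saturating hole trapping pulling VT negative, overtaken by slower electron/interface-state buildup pushing it positive) reproduces a turnaround in a toy model. All constants below are invented for illustration, not fitted to the devices in the study:

```python
import math

def delta_vt(t, q_hole=1.0, tau_h=5.0, rate_e=0.05):
    """Toy net threshold-voltage shift (arbitrary units) under constant
    FN stress time t: hole trapping saturates exponentially, while
    electron trapping / acceptor-type interface-state generation grows
    roughly linearly and eventually dominates."""
    holes = q_hole * (1.0 - math.exp(-t / tau_h))  # saturating negative term
    electrons = rate_e * t                          # slowly growing positive term
    return electrons - holes

# Sweep stress time: the shift first goes negative, then turns around
shifts = [delta_vt(t) for t in range(0, 101, 5)]
```

In this sketch the turnaround (critical) time is where the electron-buildup rate first matches the decaying hole-trapping rate; shifting either rate constant moves the turnaround point, mirroring the nitrided- versus pure-oxide difference the abstract reports.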
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Other work, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
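The scalar-weighted case of the averaging problem described above has a well-known closed form: the optimal average is the dominant eigenvector of the weighted outer-product matrix of the quaternions. A minimal sketch of that idea (the function name and NumPy formulation are ours, not taken from the Note):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions (rows of `quats`) as the eigenvector of
    the largest eigenvalue of M = sum_i w_i * q_i q_i^T. The sign of
    each q_i cancels in the outer product, so q and -q average alike."""
    quats = np.asarray(quats, dtype=float)
    weights = np.ones(len(quats)) if weights is None else np.asarray(weights, float)
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    avg = eigvecs[:, -1]                  # dominant eigenvector
    return avg / np.linalg.norm(avg)
```

Because q and -q represent the same attitude, a comparison of the result against a reference quaternion should use the magnitude of the dot product rather than componentwise equality.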
Kilburn, Tina R.; Eriksen, Hanne-Lise Falgreen; Underbjerg, Mette; Thorsen, Poul; Mortensen, Erik Lykke; Landrø, Nils Inge; Bakketeig, Leiv S.; Grove, Jakob; Sværke, Claus; Kesmodel, Ulrik Schiøler
2015-01-01
Background Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Method Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60-64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Results Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1-4. Conclusion This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring. PMID:26382068
NASA Astrophysics Data System (ADS)
Maglich, Bogdan C.
1988-08-01
A D+ beam of kinetic energy Ti = 0.7 MeV was stored in a "simple mirror" magnetic field as self-colliding orbits or migma and neutralized by ambient, oscillating electrons whose bounce frequencies were externally controlled. The space charge density limit was exceeded by an order of magnitude without instabilities. Three nondestructive diagnostic methods allowed measurements of ion orbit distribution, ion storage times, ion energy distribution, nuclear reaction rate, and reaction product spectrum. Migma formed a disc 20 cm in diameter and 0.5 cm thick. Its ion density was sharply peaked in the center; the ion-to-electron temperature ratio was Ti/Te ≈ 10^3; ion-electron temperature equilibrium was never reached. The volume-average and central D+ densities were n = 3.2 × 10^9 cm^-3 and nc = 3 × 10^10 cm^-3 respectively, compared to the space charge limit density nsc = 4 × 10^8 cm^-3. The energy confinement time was τc = 20-30 s, limited by charge exchange reactions with the residual gas in the vacuum (5 × 10^-9 Torr). The ion energy loss rate was 1.4 keV/s. None of the instabilities that were observed in mirrors at several orders of magnitude lower density occurred. The proton energy spectrum for d + d → T + p + 4 MeV shows that the deuterons collided at an average crossing angle of 160°. Evidence for exponential density buildup has also been observed. Relative to Migma III results and measured in terms of the product of ion energy E, density n, and confinement time τ, device performance was improved by a factor of 500. Using the central fast ion density, we obtained the triple product Tnτ ≈ 4 × 10^14 keV s cm^-3, which is greater than that of the best fusion devices. The luminosity (collision rate per unit cross section) was 10^29 cm^-2 s^-1, with 0.7 A of ion current through the migma center.
The stabilizing features of migma are: (1) large Larmor radius; (2) small canonical angular momentum; (3) short axial length z (disc shape); (4) nonadiabatic motions in r and z; (5) precession and energy spread; (6) ambipolar potential; (7) radial density gradient; (8) large ion-to-electron temperature ratio; (9) metal walls in z; (10) suitability for external stabilization techniques (small volume, recurrent surfacing of ions); and particularly (11) diamagnetic well. Extrapolation of the results to reactor densities led to a D + 3He reactor "basic migma disc" 0.5 m in diameter and 0.25 m thick, suspended in a 10 T field, generating 1.5 MW(th). A 2-MW(e) power plant would consist of 3 such discs in a common volume, referred to as a triplet migmacell. Its specific power is projected to be 1 MW(e)/ton. A large power plant of any size would consist of a large number of migma discs in a common volume. The advantages of such modular systems are: (1) economy of mass production: capital cost per kW will be 25% that of fission and 10% that of DT fusion; (2) plants will be economical for all sizes above 10 kW(e); (3) minimal heat pollution, thanks to direct conversion of the charged-ion kinetic energy into electricity; (4) no proliferative potential; and (5) large power-to-weight ratio due to absence of shielding. Anticipated physics problems in density increase are discussed.
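The quoted triple product can be re-derived from the abstract's own numbers; a quick arithmetic check using the central fast-ion density and the lower end of the reported confinement time:

```python
# Arithmetic check of the quoted Migma IV triple product.
T_keV = 700.0   # D+ kinetic energy, 0.7 MeV, in keV
n_c = 3e10      # central fast-ion density, cm^-3
tau_s = 20.0    # energy confinement time, s (20-30 s reported)
triple = T_keV * n_c * tau_s
print(f"Tn*tau = {triple:.1e} keV s cm^-3")  # -> 4.2e+14, consistent with the quoted ~4e14
```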
School Turnarounds: Resisting the Hype, Giving Them Hope. Education Outlook No. 2
ERIC Educational Resources Information Center
Hess, Frederick M.; Gift, Thomas
2009-01-01
Education reformers are abuzz over school "turnarounds," a simple idea that has undeniable appeal. School turnarounds offer the opportunity to take familiar educational institutions and improve them through coaching, mentoring, capacity building, best practices, and other existing tools. Unlike most reform efforts, which focus on incremental…
School Turnaround Fever: The Paradoxes of a Historical Practice Promoted as a New Reform
ERIC Educational Resources Information Center
Peck, Craig; Reitzug, Ulrich C.
2014-01-01
School "turnaround" has received significant attention recently in education literature and policy action, especially as a means to dramatically improve urban education. In current common education usage, "turnaround" refers to the rapid, significant improvement in the academic achievement of persistently low-achieving schools.…
MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard
2016-01-01
Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
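The polynomial-correction idea above, fitting a low-order polynomial to the phase of static tissue and subtracting it from the whole image, can be sketched as follows. The function and its first-order (planar) default are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def polynomial_phase_correction(phase, mask, order=1):
    """Fit a 2-D polynomial of the given order to `phase` over static
    tissue (`mask`, boolean) and subtract the fitted surface everywhere.
    order=1 fits a plane, the simplest eddy-current phase model."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # Polynomial basis evaluated on the masked (static) pixels.
    cols = [x[mask] ** i * y[mask] ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    # Evaluate the fitted surface on the full grid and subtract it.
    full = np.column_stack([x.ravel() ** i * y.ravel() ** j
                            for i in range(order + 1) for j in range(order + 1 - i)])
    return phase - (full @ coef).reshape(ny, nx)
```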
NASA Astrophysics Data System (ADS)
Simond, G.; Bendahan, D.; Cozzone, P. J.
2001-03-01
We present an exact analytical method dedicated to fitting time-dependent exponential-like changes in MR spectra. As an illustration, this method has been applied to fitting metabolic changes recorded by 31P MRS in human skeletal muscle occurring during a rest-exercise-recovery protocol. When recording metabolic changes with the accumulative method, the time averaging of the MR signals implies the choice of a time index for fitting any changes in the features of the associated MR spectra. A critical examination of the different ways (constant, linear, and exponential) of choosing the time index is reported. By numerical analysis, we have calculated the errors generated by the three methods and we have compared their sensitivity to noise. In the case of skeletal muscle, both constant and linear methods introduce large and uncontrolled errors for the whole set of metabolic parameters derived from [PCr] changes. In contrast, the exponential method affords a reliable estimation of critical parameters in muscle bioenergetics in both normal and pathological situations. This method is very easy to implement and provides an exact analytical solution to fitting changes in MR spectra recorded by the accumulative method.
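The "exponential" choice of time index discussed above can be illustrated for a signal decaying as exp(-t/τ): assign to each accumulated block the time t* at which the exponential equals the block average. This sketch is our reading of the idea, not the authors' code:

```python
import numpy as np

def exponential_time_index(t0, T, tau):
    """Time index t* for a spectrum accumulated over [t0, t0 + T] when the
    underlying change is exp(-t/tau): choose t* so that exp(-t*/tau)
    equals the average of the exponential over the accumulation window."""
    block_mean = tau / T * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))
    return -tau * np.log(block_mean)
```

For slow kinetics (τ much longer than the block) t* reduces to the block midpoint, which is why the constant and linear indexing schemes only fail when the metabolic change is fast relative to the accumulation time.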
Shih, H C; Tsai, S W; Kuo, C H
2012-01-01
A solid-phase microextraction (SPME) device was used as a diffusive sampler for airborne propylene glycol ethers (PGEs), including propylene glycol monomethyl ether (PGME), propylene glycol monomethyl ether acetate (PGMEA), and dipropylene glycol monomethyl ether (DPGME). Carboxen-polydimethylsiloxane (CAR/PDMS) SPME fiber was selected for this study. A polytetrafluoroethylene (PTFE) tubing was used as the holder, and the SPME fiber assembly was inserted into the tubing as a diffusive sampler. The diffusion path length and area of the sampler were 0.3 cm and 0.00086 cm^2, respectively. The theoretical sampling constants at 30°C and 1 atm for PGME, PGMEA, and DPGME were 1.50 × 10^-2, 1.23 × 10^-2 and 1.14 × 10^-2 cm^3 min^-1, respectively. For evaluations, known concentrations of PGEs around the threshold limit values/time-weighted average with specific relative humidities (10% and 80%) were generated both by the air bag method and the dynamic generation system, while 15, 30, 60, 120, and 240 min were selected as the time periods for vapor exposures. Comparisons of the SPME diffusive sampling method to Occupational Safety and Health Administration (OSHA) organic Method 99 were performed side-by-side in an exposure chamber at 30°C for PGME. A gas chromatography/flame ionization detector (GC/FID) was used for sample analysis. The experimental sampling constants of the sampler at 30°C were (6.93 ± 0.12) × 10^-1, (4.72 ± 0.03) × 10^-1, and (3.29 ± 0.20) × 10^-1 cm^3 min^-1 for PGME, PGMEA, and DPGME, respectively. The adsorption of chemicals on the stainless steel needle of the SPME fiber was suspected to be one of the reasons why significant differences between theoretical and experimental sampling rates were observed. Correlations between the results for PGME from both the SPME device and OSHA organic Method 99 were linear (r = 0.9984) and consistent (slope = 0.97 ± 0.03). Face velocity (0-0.18 m/s) also proved to have no effects on the sampler.
However, the effects of temperature and humidity have been observed. Therefore, adjustments of experimental sampling constants at different environmental conditions will be necessary. PMID:22651222
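The theoretical sampling constants above follow from Fick's first law for a diffusive sampler, SR = D·A/L, with the stated geometry (A = 0.00086 cm^2, L = 0.3 cm). A sketch; the diffusion coefficient shown is back-calculated from the reported PGME value for illustration, not a tabulated property:

```python
def sampling_constant(D_cm2_min, area_cm2=0.00086, path_cm=0.3):
    """Fick's-law diffusive sampling constant SR = D * A / L, in cm^3/min,
    for the sampler geometry given in the text."""
    return D_cm2_min * area_cm2 / path_cm

# Diffusion coefficient implied by the reported theoretical PGME value:
D_pgme = 1.50e-2 * 0.3 / 0.00086  # ~5.2 cm^2/min (illustrative back-calculation)
```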
Muñoz-García, Juan Carlos; Corzana, Francisco; de Paz, José L; Angulo, Jesús; Nieto, Pedro M
2013-11-01
The polyconformational behavior of L-iduronic acid (L-IdoA2S) in heparin derivatives has been previously analyzed in terms of intra-ring proton-proton vicinal coupling constants ((3)JHH) through mathematical fit of experimental and theoretical values (Ferro DR, Provasoli A, Ragazzi M, Casu B, Torri G, Bossennec V, Perly B, Sinay P, Petitou M, Choay J. 1990. Conformer Populations of L-Iduronic Acid Residues in Glycosaminoglycan Sequences. Carbohydr Res. 195:157-167; Muñoz-García JC, López-Prados J, Angulo J, Díaz-Contreras I, Reichardt N, de Paz JL, Martín-Lomas M, Nieto PM. 2012. Effect of the substituents of the neighboring ring in the conformational equilibrium of iduronate in heparin-like trisaccharides. Chemistry. 18:16319-16331.). However, this methodology is subject to the experimental uncertainties of the J-coupling measurements, the force field deviations and the goodness of the least-squares fit. In the present work, we have used time-averaged distance restrained molecular dynamics (tar-MD) to largely reduce these errors, which enables accurate quantification of the population of conformers, or puckers, of the L-IdoA2S residue, in a set of eight heparin-like trisaccharides following the general sequence D-glucosamine (GlcN)-IdoA-GlcN, directly from the time evolution of the puckering coordinates θ and φ. Thus, by carrying out tar-MD simulations in explicit water, with the exclusive nuclear Overhauser enhancement (NOE)-derived distance H2-H5 of the L-IdoA2S (2)SO conformer as the unique imposed constraint, we have been able to accurately and easily determine the different extents to which the iduronate ring populates the polar ((1)C4 chair) and equatorial ((2)SO skew-boat) areas of the puckering sphere depending on the sulfation pattern of the flanking GlcN residues, under low temperature conditions (278 K).
The results indicate that 6-O-sulfation at the reducing-end GlcN residue facilitates the (1)C4 to (2)SO transitions of the iduronate ring by augmenting the flexibility of the C2-C3 torsion, driving the conformational equilibrium toward a majority of equatorial conformers. PMID:23903025
NASA Astrophysics Data System (ADS)
Yu, C.; Zinniker, D. A.; Moldowan, J.
2010-12-01
Urban air pollution is an ongoing and complicated problem for both residents and policy makers. This study aims to provide a better understanding of the geographic source and fate of organic pollutants in a dynamic urban environment. Natural and artificial hydrophobic substrates were employed for the passive monitoring and mapping of ground-level organic pollutants in the San Francisco Bay area. We focused specifically on volatile and semi-volatile polycyclic aromatic hydrocarbons (PAHs). These compounds are proxies for a broad range of combustion related air pollutants derived from local, regional, and global combustion sources. PAHs include several well-studied carcinogens and can be measured easily and accurately across a broad range of concentrations. Estimates of time-integrated vapor phase and particle deposition were made from measuring accumulated PAHs in the leaves of several widely distributed tree species (including Quercus agrifolia and Sequoia sempervirens) and an artificial wax film. Samples were designed to represent pollutant exposure over a period of one to several months. The selective sampling and analysis of hydrophobic substrates provides insight into the average geographic distribution of ground-level air pollutants in a simple and inexpensive way. However, accumulated organics do not directly correlate with human exposure, and the source signature of PAHs may be obscured by transport, deposition, and flux processes. We attempted to address some of these complications by studying 1) PAH accumulation rates within substrates in a controlled microcosm, 2) differences in PAH abundance in different substrate types at the same locality, and 3) samples near long-term high volume air sampling stations. We also set out to create a map of PAH concentrations based on our measurements.
This map can be directly compared with interpolated data from high-volume sampling stations and used to address questions concerning atmospheric heterogeneity of these pollutants (i.e., due to both source localization and dominant wind patterns). Our initial results indicate that exposure to PAHs in the bay area is geographically heterogeneous and individual exposure may vary by more than two orders of magnitude. The signatures of PAH contamination also vary considerably, indicating that different sources and differing transportation mechanisms may be important at different sites and times.
NASA Astrophysics Data System (ADS)
Morgan, H. A.; Harris, A. J.; Rose, W. I.
2011-12-01
The Pacaya volcanic complex has been producing lava flows nearly continuously since 1961. Matías (2009) compiled a detailed database including information such as length, surface area, volume, duration, and effusion rates for each of the 248 lava flows that occurred during this time. In this investigation, time-averaged discharge rates (TADR) were estimated for a subset of lava flows using a satellite-based method initially applied to infrared satellite data for Etna by Harris et al. (1997). Satellite-based estimates potentially provide a quicker, safer, and less expensive alternative to ground-based measurements and are therefore valuable for hazard mitigation. The excellent record of recent activity at Pacaya provides a unique opportunity to calibrate results from the satellite-based method by comparing them with reliable ground-based measurements. Imagery from two sensors of differing temporal and spatial resolutions was analyzed in order to produce a comprehensive dataset: MODIS (one image every 6 hours, 1-km pixels) and GOES (one image every 15 minutes, 4-km pixels). As of August 2011, 2403 MODIS and 2642 GOES images have been analyzed. Due to the relatively low intensity of Pacaya's effusive activity, each image was searched manually for volcanic "hot spots". It was found that MODIS data allowed better estimates of TADR than did GOES data. We suggested that the very small, sub-resolution flows typical of Pacaya may have surpassed the limits of low-resolution GOES imagery for this particular application. TADR derived from MODIS data were used to describe and parameterize eruptive cycles, as well as to explore conduit models. A pattern was found over the past two decades of short high-TADR periods followed by longer low-TADR periods.
We suggested that the low TADR experienced during longer "bleeding" of the conduit may approximate the magma supply rate to the shallow system, while high TADR eruptions may represent the release of volumes collected during phases of non-eruptive degassing. The supply rate was estimated as approximately 6777 m3/day. Since Pacaya is a continuously degassing, open-vent volcano, the supply rate to the conduit can also be estimated via sulfur dioxide gas measurements. Developing results will be discussed.
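The Harris et al. (1997)-style conversion from satellite-derived heat flux to TADR balances the heat the flow loses against the heat supplied by effusion, TADR = Q / (ρ(c_p ΔT + φ c_L)). A sketch with illustrative parameter values, which are generic basalt assumptions rather than this study's calibration:

```python
def tadr(q_total_W, dT=200.0, phi=0.45, rho=2600.0, cp=1150.0, cL=3.5e5):
    """Time-averaged discharge rate (m^3/s) from total radiant + convective
    heat flux q_total_W (W), thermal-budget form of Harris et al. (1997).
    dT: cooling from eruption to solidification (K); phi: crystallized
    mass fraction; rho: lava density (kg/m^3); cp: specific heat
    (J/kg/K); cL: latent heat of crystallization (J/kg).
    All default values are illustrative assumptions."""
    return q_total_W / (rho * (cp * dT + phi * cL))
```

With these defaults, a 1 GW heat flux corresponds to roughly 1 m^3/s of lava discharge, which sets the scale of the MODIS-derived estimates described above.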
Li, Shen-Heng; Xie, Jian-Teng; Long, Hai-Bo; Zhang, Jun; Zhou, Wei-Dong; Niu, Hong-Xin; Tang, Xun; Feng, Zhong-Lin; Ye, Zhi-Ming; Zuo, Yang-Yang; Fu, Lei; Wen, Feng; Wang, Li-Ping; Wang, Wen-Jian; Shi, Wei
2015-01-01
Time-averaged serum potassium reflects the full course of serum potassium changes during peritoneal dialysis (PD) more comprehensively than a single measurement. However, the association of the fluctuation of the time-averaged serum potassium level with long-term survival of PD patients remains unknown. In this retrospective study, we included 357 incident PD patients in 2 centers from January 1, 2007 to October 31, 2012 with follow-up through October 31, 2014. Our data demonstrated that it was the lower time-averaged serum potassium level rather than the baseline serum potassium level that was associated with a high risk of death. Patients with a higher standard deviation (SD) had significantly poorer all-cause (p = 0.016) and cardiovascular mortality (p = 0.041). Among the patients with time-averaged serum potassium levels below 4.0 mEq/L, a lower mean value was more important than its SD to predict death risk. In contrast, among the patients with time-averaged serum potassium levels above 4.0 mEq/L, those with a higher serum potassium SD were at greater risk. Both the lower time-averaged serum potassium and its fluctuation contributed disproportionately to the high death risk in PD patients. PMID:26507157
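A time-averaged laboratory value of the kind used above is a time-weighted mean of serial measurements, so unevenly spaced draws are weighted by the interval they span rather than counted equally. A minimal sketch (not the study's code):

```python
import numpy as np

def time_averaged(values, times):
    """Time-weighted average of serial measurements via the trapezoidal
    rule: each pair of draws contributes its mean times its interval."""
    v = np.asarray(values, dtype=float)
    t = np.asarray(times, dtype=float)
    area = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))
    return area / (t[-1] - t[0])
```

The within-patient SD of the same serial values then captures the fluctuation component analyzed in the study.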
Autonomous Robotic Refueling System (ARRS) for rapid aircraft turnaround
NASA Astrophysics Data System (ADS)
Williams, O. R.; Jackson, E.; Rueb, K.; Thompson, B.; Powell, K.
An autonomous robotic refuelling system is being developed to achieve rapid aircraft turnaround, notably during combat operations. The proposed system includes a gantry positioner with sufficient reach to position a robotic arm that performs the refuelling tasks; a six degree of freedom manipulator equipped with a remote center of compliance, torque sensor, and a gripper that can handle standard tools; a computer vision system to locate and guide the refuelling nozzle, inspect the nozzle, and avoid collisions; and an operator interface with video and graphics display. The control system software will include components designed for trajectory planning and generation, collision detection, sensor interfacing, sensory processing, and human interfacing. The robotic system will be designed so that upgrading to perform additional tasks will be relatively straightforward.
ERIC Educational Resources Information Center
Player, Daniel; Hambrick Hitt, Dallas; Robinson, William
2014-01-01
This guide provides state education agencies (SEAs) and districts (LEAs) with guidance about how to assess the district's readiness to support school turnaround initiatives. Often, school turnaround efforts focus only on the school's structure and leadership. Rarely do policymakers or practitioners think about school turnaround as a system-level…
NASA Astrophysics Data System (ADS)
Liu, Yang-Wei; Liu, Bao-Jie; Lu, Li-Peng
2012-04-01
The average-passage equation system (APES) provides a rigorous mathematical framework for accounting for the unsteady blade row interaction through multistage compressors in steady state environment by introducing deterministic correlations (DC) that need to be modeled to close the equation system. The primary purpose of this study was to provide insight into the DC characteristics and the influence of DC on the time-averaged flow field of the APES. In Part 2 of this two-part paper, the influence of DC on the time-averaged flow field was systematically studied. Several time-averaging computations were conducted with various boundary conditions and DC for the downstream stator in a transonic compressor stage, by employing the CFD solver developed in Part 1 of this two-part paper. These results were compared with the time-averaged unsteady flow field and the steady one. The study indicated that the circumferential-averaged DC can take into account major part of the unsteady effects on spanwise redistribution of flow fields in compressors. Furthermore, it demonstrated that both deterministic stresses and deterministic enthalpy fluxes are necessary to reproduce the time-averaged flow field.
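A deterministic correlation of the kind closed in the APES is the deterministic (blade-periodic) analogue of a Reynolds stress: the time average of a product of fluctuating quantities minus the product of their time averages. For a single velocity component sampled at equally spaced blade-passing snapshots, this reduces to:

```python
import numpy as np

def deterministic_stress(u_snapshots):
    """Deterministic stress for one velocity component: time-average of
    u*u minus the square of the time-averaged u, computed over equally
    spaced snapshots (rows). A sketch of the DC definition, not the
    solver's closure model."""
    u = np.asarray(u_snapshots, dtype=float)  # shape: (n_snapshots, n_points)
    return (u * u).mean(axis=0) - u.mean(axis=0) ** 2
```

The full tensor pairs every velocity component with every other (and with enthalpy for the deterministic enthalpy fluxes the study finds necessary), but each entry follows this same average-of-product minus product-of-averages pattern.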
High-pressure rocket engine turnaround duct computational fluid dynamics analysis
NASA Astrophysics Data System (ADS)
Cox, G. B., Jr.
1984-10-01
Current and projected high-pressure, staged-combustion rocket engine designs, such as the XLR129 and space shuttle main engine (SSME), introduced a type of turnaround duct flowpath wherein the turnaround is accompanied by an increase in mean radius and a decrease in fluid velocity through a subsequent diffuser. This turnaround duct flowpath is needed in the rocket engine to conduct high-pressure turbopump turbine discharge gases toward the main injector while maintaining a compact turbomachinery and powerhead flowpath. Current experience in the SSME shows that this type of turnaround duct can experience flow separation with an attendant increase in flow turbulence levels. Advances in computational fluid dynamics techniques over the last decade produced a number of codes which are applicable to the analysis and prediction of flow field behavior within such ducts. A version of the TEACH code developed at Pratt & Whitney Engineering Division was used to analyze three representative turnaround duct configurations. The study included the effects of fluid properties, inlet velocity flowfields, solution grid mesh size, and numerical solution algorithm upon calculated flow. Results include flow streamline comparisons, velocity profiles, and calculated total pressure losses. The results showed that turnaround ducts of the type studied can exhibit separation but that appropriately designed turnaround ducts can be made to function without suffering from internal flow separation.
Wang, Huapei; Kent, Dennis V; Rochette, Pierre
2015-12-01
The geomagnetic field is predominantly dipolar today, and high-fidelity paleomagnetic mean directions from all over the globe strongly support the geocentric axial dipole (GAD) hypothesis for the past few million years. However, the bulk of paleointensity data fails to coincide with the axial dipole prediction of a factor-of-2 equator-to-pole increase in mean field strength, leaving the core dynamo process an enigma. Here, we obtain a multidomain-corrected Pliocene-Pleistocene average paleointensity of 21.6 ± 11.0 µT recorded by 27 lava flows from the Galapagos Archipelago near the Equator. Our new result in conjunction with a published comprehensive study of single-domain-behaved paleointensities from Antarctica (33.4 ± 13.9 µT) that also correspond to GAD directions suggests that the overall average paleomagnetic field over the past few million years has indeed been dominantly dipolar in intensity yet only ~60% of the present-day field strength, with a long-term average virtual axial dipole magnetic moment of the Earth of only 4.9 ± 2.4 × 10^22 A·m^2. PMID:26598664
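The dipole moment quoted above can be checked with the standard virtual axial dipole moment relation, m = 4πR³B / (μ0·sqrt(1 + 3 sin²λ)), where λ is the geomagnetic latitude. Plugging in the two intensities from the abstract (Equator and pole) reproduces the ~4.9 × 10^22 A·m^2 figure; this is our arithmetic check, not the paper's averaging procedure:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
R_E = 6.371e6         # Earth radius, m

def vadm(B_T, geomagnetic_lat_deg):
    """Virtual axial dipole moment (A*m^2) from field intensity B_T
    (tesla) at a geomagnetic latitude, via the dipole field relation."""
    lam = math.radians(geomagnetic_lat_deg)
    return 4.0 * math.pi * R_E**3 * B_T / (
        MU0 * math.sqrt(1.0 + 3.0 * math.sin(lam) ** 2))

m_eq = vadm(21.6e-6, 0.0)     # Galapagos, near the Equator: ~5.6e22
m_pole = vadm(33.4e-6, 90.0)  # Antarctica, near the pole: ~4.3e22
```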
Rapid Response Risk Assessment Turnaround System (R3ATS)
Redus, K.; Escher, R.
2008-07-01
The Rapid Response Risk Assessment Turnaround System (R3ATS) is a decision support system that can be used for cost and schedule risk assessment as prescribed by the US Department of Energy (DOE) Order 413.3A, Program and Project Management for the Acquisition of Capital Assets. Unlike the complex and training-intensive project control and accounting risk systems - or naive and statistically incorrect risk assessment approaches - employed throughout the DOE complex, R3ATS is a powerful and yet simple decision support system for conducting project risk assessments. Outputs from R3ATS include: (1) establishing a project risk register which can be periodically updated to regularly monitor and assess a dynamic risk picture, (2) producing statistically derived and justifiable cost and schedule contingency probability density functions, and (3) inclusion, via Bayesian updating, of significant trigger events that result in project cost, schedule or technical risk events. During FY2007, R3ATS was used by the Oak Ridge Transuranic (TRU) Waste Processing Center (TWPC) project to examine the FY 2008 through FY 2009 budget (~$70 million) and the life cycle budget (over $500 million). In less than eight weeks, Oak Ridge DOE personnel and TWPC project management were trained on - and subsequently incorporated - the R3ATS approach, thus demonstrating its significance as a viable and rapid turnaround decision support tool for cost and schedule risk assessment. In conclusion: R3ATS was developed to meet the mission need for a defensible risk-oriented decision support system. We have demonstrated how R3ATS meets DOE ORO budget planning efforts associated with the TWPC life cycle from FY 2008 through FY 2018. We have further identified critical shortcomings in the overall approach that DOE takes to perform risk management.
When we realized that DOE guidance for risk management is naive and can easily produce results that are not reliable or repeatable, we applied a defensible systems engineering approach to develop R3ATS. The TWPC approach to risk management offers a risk-balancing approach for performance management and ensures that future investment in training enables all users to use risk management as a tool for successful TWPC mission accomplishment. (authors)
NASA Astrophysics Data System (ADS)
Vandenrijt, Jean-François; Thizy, Cédric; Georges, Marc P.
2015-08-01
We present an investigation of interferometric methods for vibration mode identification to be applied on shakers in an industrial environment. We consider long-wave infrared (LWIR) ESPI in time-averaged mode with the use of phase stepping, which allows transforming Bessel fringes, typical of time averaging, into phase values that provide a better way to identify vibration mode shapes. The use of a long laser wavelength allows measuring larger vibration amplitudes than can be achieved in visible light. Longer wavelengths also give lower sensitivity to external perturbations. Time-averaged phase-stepped shearography in the visible is also used as an alternative to LWIR ESPI for working in industrial environments. Although not demonstrated here, an interesting feature for future investigations of LWIR ESPI is its potential for simultaneous temperature measurement during vibrations.
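In time-averaged ESPI the fringe brightness follows the squared zeroth-order Bessel function of the vibration amplitude, which is why raw Bessel fringes need phase stepping before they yield quantitative mode shapes. A sketch of that fringe function, with J0 computed from its integral representation to stay dependency-free; the 10.6 µm default is a typical LWIR (CO2 laser) wavelength, an assumption rather than this paper's stated value:

```python
import numpy as np

def j0(x, n=2000):
    """Zeroth-order Bessel function via its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x sin(theta)) dtheta."""
    theta = np.linspace(0.0, np.pi, n)
    vals = np.cos(x * np.sin(theta))
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(theta)) / np.pi

def time_averaged_fringe(amplitude_m, wavelength_m=10.6e-6):
    """Normalized time-averaged fringe intensity ~ J0^2(4*pi*a/lambda)
    for out-of-plane vibration amplitude a."""
    return j0(4.0 * np.pi * amplitude_m / wavelength_m) ** 2
```

Because the first dark fringe sits at the first zero of J0 (argument 2.405), a tenfold longer wavelength pushes each fringe to a tenfold larger amplitude, which is the abstract's point about measuring larger vibrations in LWIR.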
Rapid Turnaround of Costing/Designing of Space Missions Operations
NASA Technical Reports Server (NTRS)
Kudrle, Paul D.; Welz, Gregory A.; Basilio, Eleanor
2008-01-01
The Ground Segment Team (GST), at NASA's Jet Propulsion Laboratory in Pasadena, California, provides high-level mission operations concepts and cost estimates for projects that are in the formulation phase. GST has developed a tool to track costs, assumptions, and mission requirements, and to rapidly turnaround estimates for mission operations, ground data systems, and tracking for deep space and near Earth missions. Estimates that would often take several weeks to generate are now generated in minutes through the use of an integrated suite of cost models. The models were developed through interviews with domain experts in areas of Mission Operations, including but not limited to: systems engineering, payload operations, tracking resources, mission planning, navigation, telemetry and command, and ground network infrastructure. Data collected during interviews were converted into parametric cost models and integrated into one tool suite. The tool has been used on a wide range of missions from small Earth orbiters, to flagship missions like Cassini. The tool is an aid to project managers and mission planners as they consider different scenarios during the proposal and early development stages of their missions. The tool is also used for gathering cost related requirements and assumptions and for conducting integrated analysis of multiple missions.
NASA Astrophysics Data System (ADS)
Jarochowska, Emilia; Munnecke, Axel
2015-01-01
Stable carbon isotope curves are used as a precise stratigraphic tool in the Paleozoic, even though they are commonly based on shallow-water carbonate record, characterized by low stratigraphic completeness. Identification of episodes of large-scale redeposition and erosion may improve Î´13Ccarb-based correlations. Here, a series of at least three episodes of high-energy onshore redeposition are described from the Makarivka Member (new unit) of the Ustya Formation from the Homerian (middle Silurian) of Podolia, Ukraine. The Makarivka Member is emplaced within a tidal flat succession. Its most prominent part is divided into a lower polymictic conglomerate of sand- to boulder-sized clasts representing a range of subtidal facies, and an upper heterolithic unit composed of grainstone and mudstone laminae. The aim of the study is to identify the mechanism of deposition of the allochthonous conglomeratic material in this Member. Based on analogies with recent tsunami deposits, the conglomerate is interpreted to reflect the strongest landward-directed current in the tsunami run-up phase, and the heterolith - alternating high-density landward currents, stagnant intervals allowing mud and land-derived debris to settle, and backwash flows. The tsunamite was deposited during an interval of decreasing isotopic values of the Mulde excursion, a global Î´13C excursion reaching + 5.2â€° in the studied sections. Clast redeposition in an interval characterized by rapidly changing Î´13Ccarb offers the opportunity to evaluate the degree of temporal and spatial averaging caused by the tsunami. The clasts in the polymictic conglomerate show scattered Î´13Ccarb values (- 0.3â€° to + 2.1â€°) compared to homogenous (1.3â€° to 1.6â€°) values in the matrix. 
The presence of clasts characterized by low δ13Ccarb values is explained by the decrease of δ13Ccarb with bathymetry rather than by erosion of pre-excursion strata, whereas high values characterize material entrained from the sea floor and strata directly underlying the tsunamite. Close average δ13Ccarb values (1.3‰ and 1.5‰) suggest that the matrix of the conglomerate is potentially a product of clast grinding.
Mori, T; Binder, B; Johnson, C H
1996-01-01
To ascertain whether the circadian oscillator in the prokaryotic cyanobacterium Synechococcus PCC 7942 regulates the timing of cell division in rapidly growing cultures, we measured the rate of cell division, DNA content, cell size, and gene expression (monitored by luminescence of the PpsbAI::luxAB reporter) in cultures that were continuously diluted to maintain an approximately equal cell density. We found that populations dividing at rates as rapid as once per 10 h manifest circadian gating of cell division, since phases in which cell division slows or stops recur with a circadian periodicity. The data clearly show that Synechococcus cells growing with doubling times that are considerably faster than once per 24 h nonetheless express robust circadian rhythms of cell division and gene expression. Apparently Synechococcus cells are able to simultaneously sustain two timing circuits that express significantly different periods. PMID:8816773
NASA Astrophysics Data System (ADS)
Prevosto, L.; Kelly, H.; Mancinelli, B.
2013-12-01
This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron-retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average includes not only a time-average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic with this electron temperature value, together with the corrections given by the fluctuation analysis, showed a marked departure from local thermal equilibrium in the arc core.
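The exponential law invoked above implies a simple estimator: in the electron-retarding region the current goes as I_e ∝ exp(V/Te[eV]), so Te is the reciprocal slope of ln(I_e) versus bias voltage. A minimal sketch with synthetic data (the function name and the noise-free data are illustrative, not the paper's procedure):

```python
import numpy as np

def electron_temperature_eV(bias_V, electron_current_A):
    """Estimate Te (in eV) from the electron-retarding region of a probe
    I-V characteristic, assuming the collision-free exponential law
    I_e ~ exp(V / Te): the slope of ln(I_e) vs. V is 1/Te."""
    slope, _ = np.polyfit(bias_V, np.log(electron_current_A), 1)
    return 1.0 / slope

# Synthetic retarding-region data generated with Te = 0.98 eV,
# the value reported for the arc core above
V = np.linspace(-5.0, -1.0, 50)
I = 1e-3 * np.exp(V / 0.98)
print(round(float(electron_temperature_eV(V, I)), 2))  # -> 0.98
```

The paper's contribution is precisely what this sketch omits: quantifying how small-amplitude fluctuations distort the time-averaged characteristic before such a fit is applied.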
NASA Astrophysics Data System (ADS)
Kretzschmar, J. G.; Mertens, I.
Over a period of three consecutive years (1977-1979), hourly meteorological measurements at the Nuclear Energy Research Centre (SCK/CEN) Mol, Belgium and simultaneous synoptic observations at the nearby military airport of Kleine Brogel have been compiled as input data for a bi-Gaussian dispersion model. The available information was first used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between different systems were rare. Twelve different combinations of diffusion typing scheme and dispersion parameters were then used to calculate cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d and 26 d average ground-level concentrations at receptors at 500 m, 1 km, 2 km, 4 km and 8 km from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and the higher percentiles, as well as in the annual mean concentrations. These differences are almost entirely due to the differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentration values for ground-level releases give the highest results for elevated releases, and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted by the specific conditions under which the set was derived (theoretically, experimentally and/or empirically), it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.
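The bi-Gaussian model underlying these calculations gives the centreline ground-level concentration from a continuous point release as C = Q/(π·u·σy·σz)·exp(−H²/2σz²), with the dispersion parameters σy, σz taken from the typing scheme in use. A minimal sketch; the power-law coefficients below are illustrative placeholders, which is fitting, since the paper's point is that the choice of σ-set dominates the result:

```python
import math

def glc(Q_g_s, u_m_s, x_m, H_m, a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85):
    """Centreline ground-level concentration (g/m^3) from a bi-Gaussian
    plume with total ground reflection.  The power-law dispersion
    parameters sigma = a * x**b are illustrative placeholders, not a
    specific published set."""
    sigma_y = a_y * x_m ** b_y
    sigma_z = a_z * x_m ** b_z
    return (Q_g_s / (math.pi * u_m_s * sigma_y * sigma_z)
            * math.exp(-H_m ** 2 / (2.0 * sigma_z ** 2)))

# 1 g/s continuous release, u = 4 m/s, receptor at 2 km:
# ground-level release vs. elevated release at 100 m (as in the study)
print(glc(1.0, 4.0, 2000.0, 0.0) > glc(1.0, 4.0, 2000.0, 100.0))  # -> True
```

Swapping in a different (a, b) set changes both curves by large factors, and in opposite directions for ground-level versus elevated sources — the crossover behaviour the study reports.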
ERIC Educational Resources Information Center
Smith, Scott G.
2015-01-01
In this article, Scott Smith presents an innocent problem (Problem 12 of the May 2001 Calendar from "Mathematics Teacher" ("MT" May 2001, vol. 94, no. 5, p. 384) that was transformed by several timely "what if?" questions into a rewarding investigation of some interesting mathematics. These investigations led to two…
Prescot, Andrew P.; Richards, Todd; Dager, Stephen R.; Choi, Changho; Renshaw, Perry F.
2015-01-01
1H MRS investigations have reported altered glutamatergic neurotransmission in a variety of psychiatric disorders. The unraveling of glutamate from glutamine resonances is crucial for the interpretation of these observations, although this remains a challenge at clinical static magnetic field strengths. Glutamate resolution can be improved through an approach known as echo time (TE) averaging, which involves the acquisition and subsequent averaging of multiple TE steps. The process of TE averaging retains the central component of the glutamate methylene multiplet at 2.35 ppm, with the simultaneous attenuation of overlapping phase-modulated coupled resonances of glutamine and N-acetylaspartate. We have developed a novel post-processing approach, termed phase-adjusted echo time (PATE) averaging, for the retrieval of glutamine signals from a TE-averaged 1H MRS dataset. The method works by the application of an optimal TE-specific phase term, which is derived from spectral simulation, prior to averaging over TE space. The simulation procedures and preliminary in vivo spectra acquired from the human frontal lobe at 2.89 T are presented. Three metabolite normalization schemes were developed to evaluate the frontal lobe test–retest reliability for glutamine measurement in six subjects, and the resulting values were comparable with previous reports for within-subject (9–14%) and inter-subject (14–20%) measures. Using the acquisition parameters and TE range described, glutamine quantification is possible in approximately 10 min. The post-processing methods described can also be applied retrospectively to extract glutamine and glutamate levels from previously acquired TE-averaged 1H MRS datasets. PMID:22407923
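The core of PATE averaging is a small change to ordinary TE averaging: multiply each TE step's complex spectrum by a TE-specific phase before averaging over TE space. A toy numpy sketch; in the method above the optimal phases come from spectral simulation, whereas the two-point data and the 180° phase here are purely illustrative:

```python
import numpy as np

def pate_average(spectra, te_phases_deg):
    """Phase-adjusted TE averaging (sketch): apply a TE-specific phase
    to each TE step's complex spectrum, then average over TE space."""
    spectra = np.asarray(spectra, dtype=complex)
    phases = np.exp(1j * np.deg2rad(np.asarray(te_phases_deg, dtype=float)))
    return (spectra * phases[:, None]).mean(axis=0)

# Two TE steps in which the resonance of interest is phase-inverted:
# plain TE averaging cancels it, a 180 deg adjustment retains it.
s = [np.array([1.0 + 0j]), np.array([-1.0 + 0j])]
print(abs(np.mean(s)))                     # -> 0.0 (conventional averaging)
print(abs(pate_average(s, [0, 180])[0]))   # -> 1.0 (phase-adjusted)
```

This is also why the approach can be applied retrospectively: it is pure post-processing on the already-acquired TE-resolved data.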
Vandesompele, Jo; De Preter, Katleen; Pattyn, Filip; Poppe, Bruce; Van Roy, Nadine; De Paepe, Anne; Speleman, Frank
2002-01-01
Background Gene-expression analysis is increasingly important in biological research, with real-time reverse transcription PCR (RT-PCR) becoming the method of choice for high-throughput and accurate expression profiling of selected genes. Given the increased sensitivity, reproducibility and large dynamic range of this methodology, the requirements for a proper internal control gene for normalization have become increasingly stringent. Although housekeeping gene expression has been reported to vary considerably, no systematic survey has properly determined the errors related to the common practice of using only one control gene, nor presented an adequate way of working around this problem. Results We outline a robust and innovative strategy to identify the most stably expressed control genes in a given set of tissues, and to determine the minimum number of genes required to calculate a reliable normalization factor. We have evaluated ten housekeeping genes from different abundance and functional classes in various human tissues, and demonstrated that the conventional use of a single gene for normalization leads to relatively large errors in a significant proportion of samples tested. The geometric mean of multiple carefully selected housekeeping genes was validated as an accurate normalization factor by analyzing publicly available microarray data. Conclusions The normalization strategy presented here is a prerequisite for accurate RT-PCR expression profiling, which, among other things, opens up the possibility of studying the biological relevance of small expression differences. PMID:12184808
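The normalization factor validated above is simply the geometric mean of several carefully selected control genes. A minimal sketch (the expression values are illustrative; gene selection and stability ranking are the part of the strategy not shown here):

```python
import math

def geometric_mean(values):
    """Geometric mean of control-gene expression levels -- the
    normalization factor advocated in the study."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Relative expression of three stable housekeeping genes in one sample
# (illustrative values); a single aberrant control gene would distort a
# one-gene normalization factor far more than the geometric mean.
hk = [1.20, 0.85, 1.10]
nf = geometric_mean(hk)
target_raw = 2.4                  # raw target-gene expression
print(round(target_raw / nf, 2))  # normalized expression -> 2.31
```

Using the geometric rather than arithmetic mean keeps the factor symmetric with respect to up- and down-regulation, which matters when expression ratios rather than differences are the quantity of interest.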
Shaffer, F.; Massah, H.; Sinclair, J.; Shahnam, M.
1995-04-30
This paper describes an experimental facility to measure time-averaged properties of particle-wall collisions. A particle tracking velocimetry (PTV) system measures particle rebound angles and velocities and impact angles and velocities for specific particles. From these measured properties, values of the normal and tangential coefficients of restitution are derived. The PTV system rapidly measures thousands of particle-wall collisions, enabling determination of time-averaged properties. In this initial work we use 191 µm glass particles colliding with a glass plate at a fixed angle of 17.5° and a velocity of 17.07 m/s. Observations are also made of aspherical particles colliding with gross sliding. In the future, this experimental system will be used with particles and wall materials of industrial significance.
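The derived quantities are ratios of rebound to impact velocity components. A minimal sketch, using the impact conditions quoted above; the rebound components are illustrative, not measured values from the paper:

```python
import math

def restitution_coefficients(vn_in, vt_in, vn_out, vt_out):
    """Normal and tangential coefficients of restitution from measured
    impact and rebound velocity components -- the per-collision
    quantities the PTV system averages over thousands of events."""
    return abs(vn_out) / abs(vn_in), abs(vt_out) / abs(vt_in)

# Impact at 17.07 m/s and 17.5 deg from the wall (conditions quoted in
# the abstract); the rebound components below are illustrative only.
theta = math.radians(17.5)
vn_in, vt_in = 17.07 * math.sin(theta), 17.07 * math.cos(theta)
e_n, e_t = restitution_coefficients(vn_in, vt_in, 0.9 * vn_in, 0.95 * vt_in)
print(round(e_n, 2), round(e_t, 2))  # -> 0.9 0.95
```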
NASA Astrophysics Data System (ADS)
Kidwell, Susan M.; Best, Mairi M. R.; Kaufman, Darrell S.
2005-09-01
Radiocarbon-calibrated amino-acid racemization ages of individually dated bivalve mollusk shells from Caribbean reef, nonreefal carbonate, and siliciclastic sediments in Panama indicate that siliciclastic sands and muds contain significantly older shells (median 375 yr, range up to ~5400 yr) than nearby carbonate seafloors (median 72 yr, range up to ~2900 yr; maximum shell ages differ significantly at p < 0.02 using extreme-value statistics). The implied difference in shell loss rates is contrary to physicochemical expectations but is consistent with observed differences in shell condition (greater bioerosion and dissolution in carbonates). Higher rates of shell loss in carbonate sediments should lead to greater compositional bias in surviving skeletal material, resulting in taphonomic trade-offs: less time averaging but probably higher taxonomic bias in pure carbonate sediments, and lower bias but greater time averaging in siliciclastic sediments from humid-weathered accretionary arc terrains, which are a widespread setting of tropical sedimentation.
A Rapid Turnaround Two-Stage Adiabatic Demagnetization Refrigerator for Cooling to 50 mK
NASA Astrophysics Data System (ADS)
Shirron, P. J.; DiPirro, M. J.; Panek, J. S.; Francis, J. J.; Warner, B. A.; Jackson, M. L.
2006-04-01
Many research and development programs require rapid access to very low temperatures (50 mK). For detector development, relatively large experiment volumes are also needed for tests involving integrated detectors and readout amplifiers (which may need to be stationed at a different temperature). To provide this capability in a versatile, fast turnaround system, we have constructed a two-stage adiabatic demagnetization refrigerator (ADR) that is operated in a simple bucket-style dewar. The ADR/cryostat is separated into two concentric inserts; the outer insert supports two 3.3 T magnets, magnet leads and magnetic shielding, and the inner one consists of a hermetic experiment volume coupled to the ADR's salt pills and heat switches. The magnet insert remains in the dewar at all times, while the ADR insert may be inserted and removed even when the dewar is cold. The cooldown from room temperature takes less than 1.5 hours, and cycling of the ADR as little as 30 minutes. Future tests will investigate the use of neon exchange gas to accelerate cooling of the ADR and more thermally isolated components such as wiring and structural supports. Design and operation of the system are discussed.
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Hulot, G.; Johnson, C. L.
2013-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.
Mineo, Placido; Vitalini, Daniele; Scamporrino, Emilio; Bazzano, Sebastiano; Alicata, Rossana
2005-01-01
The dependence of the calculated average molecular mass of a polyethylene glycol with a large polydispersity on the instrumental parameters adopted in the acquisition of mass spectra using delayed extraction matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (DE MALDI-TOFMS) was investigated. It has been shown that a combined effect of delay times and potential gradients can act on the ion cloud in the source chamber affecting both mass resolution and average molecular mass value of the analyzed polymeric sample. Also examined was a blend of two different polymers (a PEG and a PMMA commercial sample having a similar average molecular mass), which presents an additional problem concerning the discrimination among the different polymer species as a function of the experimental conditions. In this work, the best instrumental conditions to obtain both good resolution and a correct average molecular mass for the examined polydisperse sample are reported. PMID:16134231
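The averages in question are the standard number- and weight-average molecular masses computed from the peak masses and intensities of the spectrum. A minimal sketch with a toy oligomer series (the intensities are illustrative; the study's point is that the measured intensities themselves shift with delay time and potential gradients):

```python
def average_molecular_masses(masses, intensities):
    """Number- and weight-average molecular mass (Mn, Mw) from a mass
    spectrum, treating peak intensity as oligomer abundance:
    Mn = sum(m*i)/sum(i), Mw = sum(m^2*i)/sum(m*i)."""
    n = sum(intensities)
    first = sum(m * i for m, i in zip(masses, intensities))
    second = sum(m * m * i for m, i in zip(masses, intensities))
    return first / n, second / first

# Toy PEG-like oligomer series (44 Da repeat unit; illustrative intensities)
masses = [1000 + 44 * k for k in range(5)]
intensities = [1, 3, 5, 3, 1]
mn, mw = average_molecular_masses(masses, intensities)
print(mn, round(mw, 1))  # -> 1088.0 1089.9
```

Any instrument-dependent distortion of the intensity envelope (e.g. mass-dependent detector discrimination in a blend of PEG and PMMA) propagates directly into Mn and Mw through these sums.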
Holyoak, Marcel; Meese, Robert J.; Graves, Emily E.
2014-01-01
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005 to 2011, breeding site occupancy declined three times faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation.
The method we employ should be useful in other metapopulation studies for calculating time-averaged reproductive output for different sites. PMID:24817307
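The shape of such an index can be sketched in a few lines: average, over years, the product of occupancy, abundance, and per-capita success for each site. This is a simplified stand-in for the index combining occupancy, abundance, frequency of occurrence and reproductive success described above; all field names and numbers are illustrative:

```python
def net_time_averaged_output(years):
    """Net time-averaged reproductive contribution of a breeding site:
    mean over years of occupancy x abundance x per-capita success.
    A simplified, illustrative version of the combined index."""
    total = 0.0
    for y in years:
        total += (1.0 if y["occupied"] else 0.0) * y["females"] * y["young_per_female"]
    return total / len(years)

# A large grain-field colony that fails at harvest can contribute less,
# time-averaged, than a much smaller but secure nettle colony.
grain_field = [
    {"occupied": True, "females": 20000, "young_per_female": 0.05},
    {"occupied": True, "females": 15000, "young_per_female": 0.0},
]
nettle = [
    {"occupied": True, "females": 2000, "young_per_female": 1.1},
    {"occupied": False, "females": 0, "young_per_female": 0.0},
]
print(net_time_averaged_output(grain_field) < net_time_averaged_output(nettle))  # -> True
```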
ERIC Educational Resources Information Center
Schaffer, Eugene; Reynolds, David; Stringfield, Sam
2012-01-01
Beginning from 1 high-poverty, historically low-achieving secondary school's successful turnaround work, this article provides data relative to a successful school turnaround, the importance of external and system-level supports, and the importance of building for sustainable institutionalization of improvements. The evidence suggests the…
ERIC Educational Resources Information Center
Murphy, Joseph
2008-01-01
Purpose: The purpose of this paper is to review the literature from the organizational sciences to develop a grounded narrative of turnaround leadership. Design/methodology/approach: The paper is a review of literature, which employs a ten-step process to explore and make sense of the turnaround literature from the organizational sciences. The…
ERIC Educational Resources Information Center
Cucchiara, Maia Bloomfield; Rooney, Erin; Robertson-Kraft, Claire
2015-01-01
School turnaround--a reform strategy that strives for quick and dramatic transformation of low-performing schools--has gained prominence in recent years. This study uses interviews and focus groups conducted with 86 teachers in 13 schools during the early stages of school turnaround in a large urban district to examine teachers' perceptions of the…
ERIC Educational Resources Information Center
MacTaggart, Terrence, Ed.
2007-01-01
This book discusses the early indicators of a college or university's need for a turnaround. It outlines financial trends and other indicators of distress, as well as benchmarks for the various stages of an effective turnaround strategy. The book will help trustees, presidents, and faculty members diagnose whether they are in denial about the true…
NASA Astrophysics Data System (ADS)
Kemaneci, Efe; Carbone, Emile; Jimenez-Diaz, Manuel; Graef, Wouter; Rahimi, Sara; van Dijk, Jan; Kroesen, Gerrit
2015-11-01
A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and it is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). We observe agreement of the global model data with several published measurements of microwave-induced oxygen plasmas in both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated and corresponding simulation data based on two distinct models shows agreement on the common parameters. The role of the square wave modulated power input is also investigated within the time-resolved description.
Wick, Marius C; Grundtman, Cecilia; Weiss, Rüdiger J; Gruber, Johann; Kastlunger, Martin; Jaschke, Werner; Klauser, Andrea S
2012-07-01
A method to estimate the radiological progression of semi-quantitative magnetic resonance imaging (MRI) changes in the sacroiliac joints of individual ankylosing spondylitis (AS) patients has not yet been described; this study examines such a method. Inflammatory disease activity and MRIs of the sacroiliac joints of 38 patients with recent-onset established AS were analyzed at baseline and during follow-up. Sacroiliac MRIs were semi-quantitatively assessed using a modification of the "Spondylarthritis Research Consortium of Canada" (SPARCC) method. In each patient, the annual inflammatory disease activity was estimated by the time-averaged C-reactive protein (CRP; mg/l), calculated as the area under the curve. The mean (SD) CRP decreased from 1.3 (1.8) at baseline to 0.5 (0.6) at the follow-up MRI (p < 0.04), which was performed after a mean (SD) disease course of 2.8 (1.5) years. The mean (SD) annual increase (Δ) of the SPARCC score from baseline to follow-up MRI was 0.4 (0.4). The baseline individual SPARCC sub-score for bone marrow edema did not correlate statistically significantly with the individual ΔSPARCC sub-score for erosions (p = N.S.). The individual AS patient correlation between annual time-averaged inflammatory disease activity and each annual ΔSPARCC sub-score was statistically significant only for erosions (p < 0.01; r = 0.71). Our results show that bone marrow edema and contrast-medium enhancement at baseline do not relate to the progression of erosions, but calculation of the individual patient's annual time-averaged inflammatory disease activity allows the annual progression of erosions in sacroiliac MRIs of patients with AS to be estimated. PMID:22422197
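Time-averaged CRP as described above is the area under the CRP-versus-time curve divided by the length of follow-up. A minimal sketch using trapezoidal integration (the visit times and the intermediate CRP value are illustrative; the endpoints echo the cohort means quoted above):

```python
def time_averaged_crp(times_yr, crp_mg_l):
    """Time-averaged CRP (mg/l): trapezoidal area under the CRP curve
    divided by the follow-up span."""
    auc = 0.0
    for i in range(1, len(times_yr)):
        auc += 0.5 * (crp_mg_l[i - 1] + crp_mg_l[i]) * (times_yr[i] - times_yr[i - 1])
    return auc / (times_yr[-1] - times_yr[0])

# Illustrative course: CRP falling from 1.3 to 0.5 mg/l over 2.8 years,
# with one hypothetical interim measurement
print(round(time_averaged_crp([0.0, 1.4, 2.8], [1.3, 0.8, 0.5]), 2))  # -> 0.85
```

Unlike a simple mean of visits, the AUC form weights each measurement by the time interval it represents, which matters when visits are unevenly spaced.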
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
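The long-memory part of an ARFIMA model rests on the fractional difference operator (1−B)^d, whose binomial expansion gives weights w_0 = 1, w_k = w_{k−1}·(k−1−d)/k. A minimal sketch of that operator (not the authors' estimation code; fitting d and the GARCH errors is the part omitted here):

```python
def frac_diff_weights(d, n):
    """First n weights of the fractional difference operator (1-B)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(x, d):
    """Apply (1-B)^d to a series (truncated expansion) -- the step that
    removes long memory before short-memory/volatility modelling."""
    w = frac_diff_weights(d, len(x))
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]

# d = 1 reduces to ordinary first differencing
print(frac_diff([1.0, 1.0, 1.0, 1.0], 1.0))  # -> [1.0, 0.0, 0.0, 0.0]
```

For HRV series the estimated d typically lies strictly between 0 and 0.5, so the weights decay slowly and every past sample contributes — the signature of long memory.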
NASA Technical Reports Server (NTRS)
Kim, J. H.; Hudson, R. D.; Thompson, A. M.
1996-01-01
An error analysis of archived total O3 from the total ozone mapping spectrometer (TOMS) (version 6) is presented. Daily total O3 maps for the tropics, from the period October 6-21, 1992, are derived from TOMS radiances following correction for these errors. These daily maps, averaged together, show a wavelike feature, which is observed in all latitude bands, underlying sharp peaks which occur at different longitudes depending on the latitude. The wave pattern is used to derive both time-averaged stratospheric and tropospheric O3 fields. The nature of the wave pattern (stratospheric or tropospheric) cannot be determined with certainty due to missing data (no Pacific sondes, no lower stratospheric Stratospheric Aerosol and Gas Experiment (SAGE) ozone for 18 months after the Mt. Pinatubo eruption) and significant uncertainties in the corroborative satellite record in the lower stratosphere (solar backscattered ultraviolet (SBUV), microwave limb sounder (MLS)). However, the time-averaged tropospheric ozone field, based on the assumption that the wave feature is stratospheric, agrees within 10% with ultraviolet differential absorption laser O3 measurements from the DC-8 during the Transport and Atmospheric Chemistry near the Equator-Atlantic (TRACE A) campaign and with ozonesonde measurements over Brazzaville, Congo, Ascension Island, and Natal, Brazil, for the period October 6-21, 1992. The derived background (nonpolluted) Indian Ocean tropospheric ozone amount, 26 Dobson units (DU), agrees with the cleanest African ozonesonde profiles for September-October 1992. The assumption of a totally tropospheric wave (flat stratosphere) gives 38 DU above the western Indian Ocean and 15-40% disagreements with the sondes. Tropospheric column O3 is high from South America to Africa, owing to interaction of dynamics with biomass burning emissions. Comparison with fire distributions from the advanced very high resolution radiometer (AVHRR) during October 1992 suggests that tropospheric O3 produced from biomass burning in South America and Africa dominates the O3 budget in the tropical southern hemisphere during the study period.
Achieving Exact and Constant Turnaround Ratio in a DDS-Based Coherent Transponder
NASA Technical Reports Server (NTRS)
D'Addario, Larry R.
2011-01-01
A report describes a non-standard direct digital synthesizer (DDS) implementation that can be used as part of a coherent transponder so as to allow any rational turnaround ratio to be exactly achieved and maintained while the received frequency varies. (A coherent transponder is a receiver-transmitter in which the transmitted carrier is locked to a pre-determined multiple of the received carrier's frequency and phase. That multiple is called the turnaround ratio.) The report also describes a general model for coherent transponders that are partly digital. A partially digital transponder is one in which analog signal processing is used to convert the signals between high frequencies at which they are radiated and relatively low frequencies at which they are converted to or from digital form, with most of the complex processing performed digitally. There is a variety of possible architectures for such a transponder, and different ones can be selected by choosing different parameter values in the general model. Such a transponder uses a DDS to create a low-frequency quasi-sinusoidal signal that tracks the received carrier's phase, and another DDS to generate an IF or near-baseband version of the transmitted carrier. With conventional DDS implementations, a given turnaround ratio can be achieved only approximately, and the error varies slightly as the received frequency changes. The non-conventional implementation employed here allows any rational turnaround ratio to be exactly maintained.
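One way to keep a rational ratio exact in integer arithmetic — sketched here as an illustration of the problem, not as the report's actual implementation — is to carry the division remainder between tuning-word updates instead of rounding it away, so no error accumulates as the received frequency varies:

```python
def scaled_phase_increments(received_words, num=880, den=749):
    """Exact rational scaling of DDS frequency tuning words: carrying the
    division remainder between updates keeps the long-run transmitted/
    received ratio exactly num/den.  880/749 is the classic deep-space
    X-band turnaround ratio, used here purely for illustration."""
    rem = 0
    for w in received_words:
        out, rem = divmod(w * num + rem, den)
        yield out

print(list(scaled_phase_increments([749] * 5)))  # -> [880, 880, 880, 880, 880]
```

A conventional DDS instead multiplies by a fixed-point approximation of num/den, so the achieved ratio is slightly wrong and the error drifts with the received frequency — exactly the defect the report's implementation removes.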
ERIC Educational Resources Information Center
Stuit, David A.
2010-01-01
This study investigates the successes of the charter and district sectors in eliminating bad schools via dramatic turnarounds in performance and/or shutdowns. It identified 2,025 low-performing charter and district schools across ten states, each of which is home to a sizable number of charter schools. These particular schools were tracked from…
Turnaround radius in an accelerated universe with quasi-local mass
NASA Astrophysics Data System (ADS)
Faraoni, Valerio; Lapierre-Léonard, Marianne; Prain, Angus
2015-10-01
We apply the Hawking-Hayward quasi-local energy construct to obtain in a rigorous way the turnaround radius of cosmic structures in General Relativity. A splitting of this quasi-local mass into local and cosmological parts describes the interplay between local attraction and cosmological expansion.
Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform
ERIC Educational Resources Information Center
Mette, Ian M.; Scribner, Jay P.
2014-01-01
This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…
ERIC Educational Resources Information Center
Brown, Angela Antuanette
2012-01-01
The specific purpose of this study was to identify the practices used by leaders of National Blue Ribbon Awarded Schools to successfully turnaround a school or to maintain proficient achievement scores within the school. The school principals answered a four part questionnaire for this study. Part 1 of the questionnaire asked about the extent to…
17 CFR 240.17Ad-2 - Turnaround, processing, and forwarding of items.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 17 Commodity and Securities Exchanges 3 2013-04-01 2013-04-01 false Turnaround, processing, and forwarding of items. 240.17Ad-2 Section 240.17Ad-2 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) GENERAL RULES AND REGULATIONS, SECURITIES EXCHANGE ACT OF 1934 Rules and Regulations Under the Securities Exchange...
A Case Study of Change Strategies Implemented in a Turnaround Elementary School
ERIC Educational Resources Information Center
Colson, Jo Ann
2012-01-01
This case study examined the change strategies in a turnaround school at the elementary level to understand and describe how change occurred and was sustained at this campus. This study examined the factors which contributed to the change in academic success of students, examined beliefs about change that led to the change process, identified the…
ERIC Educational Resources Information Center
Schmidt-Davis, Jon; Bottoms, Gene
2012-01-01
Recent studies make one reality clear: While multiple factors can cause a low-performing high school to be in a turnaround situation, every high school that makes dramatic academic improvement has strong, effective school leadership. Turning a school around is no work for novices. It takes a skilled, visionary and proactive principal to pull apart…
IET control building (TAN620) turnaround area. probably facing north. ventilation ...
IET control building (TAN-620) turnaround area. probably facing north. ventilation ducts near ceiling. sign says, "parking position no. 3, use this space last." poured concrete walls. INEEL negative no. HD-21-5-4 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
State Capacity to Support School Turnaround. NCEE Evaluation Brief. NCEE 2015-4012
ERIC Educational Resources Information Center
Tanenbaum, Courtney; Boyle, Andrea; Graczewski, Cheryl; James-Burdumy, Susanne; Dragoset, Lisa; Hallgren, Kristin
2015-01-01
One objective of the U.S. Department of Education's (ED) School Improvement Grants (SIG) and Race to the Top (RTT) program is to help states enhance their capacity to support the turnaround of low-performing schools. This capacity may be important, given how difficult it is to produce substantial and sustained achievement gains in low-performing…
Turnaround, Transformational, or Transactional Leadership: An Ethical Dilemma in School Reform
ERIC Educational Resources Information Center
Mette, Ian M.; Scribner, Jay P.
2014-01-01
This case was written for school leaders, specifically building-level principals and central office administrators attempting to implement school turnaround reform efforts. Often, leaders who embark on this type of organizational change work in intense environments that produce high levels of pressure to demonstrate improvement in student…
How to Know when Dramatic Change Is on Track: Leading Indicators of School Turnarounds
ERIC Educational Resources Information Center
Kowal, Julie; Ableidinger, Joe
2011-01-01
In recent years, national policymakers have placed new emphasis on "school turnarounds" as a strategy for rapid, dramatic improvement in chronically failing schools, calling on education leaders to turn around performance in the 5,000 lowest-achieving schools nationwide. This goal may seem daunting, given the dismal success rates of school…
Chronically Low-Performing Schools and Turnaround: Evidence from Three States
ERIC Educational Resources Information Center
Hansen, Michael; Choi, Kilchan
2012-01-01
The criteria for determining the student outcomes that define a school as having "turned around" are not well defined, and the definition of turnaround performance varies across studies. Although current policy initiatives offer guidelines for identifying CLP schools, there is no standard definition or methodology in common usage. This paper…
CAD/CAM, Creativity, and Discipline Lead to Turnaround School Success
ERIC Educational Resources Information Center
Gorman, Lynn
2012-01-01
Miami Central High School technology teacher Frank Houghtaling thinks the connection between theory and application is one reason his students perform better on the Florida Comprehensive Assessment Test (FCAT). The impressive turnaround school drew local and national attention last spring when one of Houghtaling's students, Dagoberto Cruz, won…
ERIC Educational Resources Information Center
Herman, Rebecca; Graczewski, Cheryl; James-Burdumy, Susanne; Murray, Matthew; Perez-Johnson, Irma; Tanenbaum, Courtney
2013-01-01
The federal School Improvement Grants (SIG) program, to which $3 billion were allocated under the American Recovery and Reinvestment Act of 2009 (ARRA), supports schools attempting to turn around a history of low performance. School turnaround also is a focus of Race to the Top (RTT), another ARRA-supported initiative, which involved a roughly $4…
NASA Astrophysics Data System (ADS)
Kosnik, Matthew A.; Hua, Quan; Kaufman, Darrell S.; Zawadzki, Atun
2015-03-01
Carbon-14 calibrated amino acid racemization (14C/AAR) data and lead-210 (210Pb) data are used to examine sediment accumulation rates, stratigraphic order, and the extent of time-averaging in sediments collected from the One Tree Reef lagoon (southern Great Barrier Reef, Australia). The top meter of lagoonal sediment preserves a stratigraphically ordered deposit spanning the last 600 yrs. Despite different assumptions, the 210Pb and 14C/AAR chronologies are remarkably similar indicating consistency in sedimentary processes across sediment grain sizes spanning more than three orders of magnitude (0.1-10 mm). Estimates of long-term sediment accumulation rates range from 2.2 to 1.2 mm yr-1. Molluscan time-averaging in the taphonomically active zone is 19 yrs, whereas below the depth of final burial (~15 cm), it is ~110 yrs/5 cm layer. While not a high-resolution paleontological record, this reef lagoon sediment is suitable for paleoecological studies spanning the period of Western colonization and development. This sedimentary deposit, and others like it, should be useful, albeit not ideal, for quantifying anthropogenic impacts on coral reef systems.
NASA Astrophysics Data System (ADS)
Mark, William D.
2015-10-01
The transmission-error frequency spectrum of meshing gear pairs, operating at constant speed and constant loading, is decomposed into harmonics arising from the fundamental period of the gear pair, rotational harmonics of the individual gears of the pair, and tooth-meshing harmonics. In the case of hunting-tooth gear pairs, no rotational harmonics from the individual gears, other than the tooth-meshing harmonics, are shown to occur at the same frequencies. Time-synchronous averaging utilizing a number of contiguous revolutions of the gear of interest equal to an integer multiple of the number of teeth on the mating gear is shown to eliminate non-tooth-meshing transmission-error rotational-harmonic contributions from the mating gear, and those from the gear pair, in the case of hunting-tooth gear pairs, and to minimize these contributions in the case of non-hunting-tooth gear pairs. An example computation is shown to illustrate the effectiveness of the suggested time-synchronous-averaging procedure.
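As a concrete illustration of the averaging procedure this abstract describes, here is a minimal numerical sketch (the tooth counts, sampling parameters, and signal components are invented for illustration): the transmission-error record is resampled to a fixed number of samples per revolution of the gear of interest, reshaped to one row per revolution, and the rows are averaged. Components with an integer number of cycles per revolution survive; non-synchronous components average toward zero.

```python
import numpy as np

def time_synchronous_average(signal, samples_per_rev, n_revs):
    """Average n_revs contiguous revolutions of a signal resampled to a
    fixed number of samples per revolution of the gear of interest."""
    usable = signal[: samples_per_rev * n_revs]
    return usable.reshape(n_revs, samples_per_rev).mean(axis=0)

# Hypothetical gear whose mating gear has 31 teeth: averaging over a
# multiple of 31 revolutions suppresses the mating gear's non-meshing
# rotational harmonics, per the abstract's prescription.
spr = 64                                     # samples per revolution
t = np.arange(spr * 31)
meshing = np.sin(2 * np.pi * 5 * t / spr)    # 5 cycles/rev: synchronous
mating = np.sin(2 * np.pi * 0.37 * t / spr)  # non-synchronous component
avg = time_synchronous_average(meshing + mating, spr, 31)
```

After averaging, `avg` is close to one revolution of the synchronous component alone; the non-synchronous tone is attenuated by roughly the Dirichlet-kernel factor for 31 averages.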
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
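The Poisson probabilities quoted above can be reproduced directly; in this sketch the per-decade rate of ~7 for VEI>=4 events comes from the abstract, while the rates for VEI>=5 and VEI>=6 are back-calculated assumptions chosen to match the stated ~49 percent and ~18 percent figures:

```python
from math import exp

def p_at_least_one(rate_per_decade):
    # Poisson: P(N = 0) = exp(-rate), so P(N >= 1) = 1 - exp(-rate)
    return 1.0 - exp(-rate_per_decade)

p_vei4 = p_at_least_one(7.0)   # > 0.99, matching ">99 percent"
p_vei5 = p_at_least_one(0.67)  # ~0.49 (assumed rate)
p_vei6 = p_at_least_one(0.20)  # ~0.18 (assumed rate)
```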
NASA Technical Reports Server (NTRS)
Mielke, Amy F.; Seasholtz, Richard G.; Elam, Kristie A.; Panda, Jayanta
2004-01-01
A molecular Rayleigh scattering based flow diagnostic is developed to measure time average velocity, density, temperature, and turbulence intensity in a 25.4-mm diameter nozzle free jet facility. The spectrum of the Rayleigh scattered light is analyzed using a Fabry-Perot interferometer operated in the static imaging mode. The resulting fringe pattern containing spectral information of the scattered light is recorded using a low noise CCD camera. Nonlinear least squares analysis of the fringe pattern using a kinetic theory model of the Rayleigh scattered light provides estimates of density, velocity, temperature, and turbulence intensity of the gas flow. Resulting flow parameter estimates are presented for an axial scan of subsonic flow at Mach 0.95 for comparison with previously acquired pitot tube data, and axial scans of supersonic flow in an underexpanded screeching jet. The issues related to obtaining accurate turbulence intensity measurements using this technique are discussed.
NASA Technical Reports Server (NTRS)
Heyson, Harry H.
1960-01-01
Measurements of the time-averaged induced velocities were obtained for rotor tip speeds as great as 1,100 feet per second (tip Mach number of 0.98) and measurements of the instantaneous induced velocities were obtained for rotor tip speeds as great as 900 feet per second. The results indicate that the small effects on the wake with increasing Mach number are primarily due to the changes in rotor-load distribution resulting from changes in Mach number rather than to compressibility effects on the wake itself. No effect of tip Mach number on the instantaneous velocities was observed. Under conditions for which the blade tip was operated at negative pitch angles, an erratic circulatory flow was observed.
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help diagnose certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they cannot be strictly periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. PMID:26778545
Baimatova, Nassiba; Koziel, Jacek A; Kenessov, Bulat
2015-05-11
A new and simple method for benzene, toluene, ethylbenzene and o-xylene (BTEX) quantification in vehicle exhaust was developed based on diffusion-controlled extraction onto a retracted solid-phase microextraction (SPME) fiber coating. The rationale was to develop a method based on existing and proven SPME technology that is feasible for field adaptation in developing countries. Passive sampling with the SPME fiber retracted into the needle extracted nearly two orders of magnitude less mass (n) than an exposed fiber (outside the needle), and sampling was in a time-weighted averaging (TWA) mode. Both the sampling time (t) and fiber retraction depth (Z) were adjusted to quantify a wider range of Cgas. Extraction and quantification are conducted in a non-equilibrium mode. Effects of Cgas, t, Z and T were tested. In addition, the contribution of n extracted by the metallic surfaces of the needle assembly without the SPME coating was studied. Effects of sample storage time on n loss were studied. Retracted TWA-SPME extractions followed the theoretical model. Extracted n of BTEX was proportional to Cgas, t, Dg, T and inversely proportional to Z. Method detection limits were 1.8, 2.7, 2.1 and 5.2 mg m(-3) (0.51, 0.83, 0.66 and 1.62 ppm) for BTEX, respectively. The contribution of extraction onto metallic surfaces was reproducible and influenced by Cgas and t, and less so by T and Z. The new method was applied to measure BTEX in the exhaust gas of a 1995 Ford Crown Victoria and compared with a whole gas and direct injection method. PMID:25911428
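The theoretical model referenced above for a retracted fiber is, in essence, Fick's first law of diffusion along the needle gap; a minimal sketch under assumed values (the function name and numbers are illustrative, not the paper's calibration):

```python
def twa_extracted_mass(c_gas, t, d_g, area, z):
    """Mass n extracted onto a retracted SPME fiber coating:
    n = D_g * A * C_gas * t / Z, i.e. proportional to concentration,
    sampling time and gas-phase diffusion coefficient, and inversely
    proportional to the retraction depth Z (consistent SI units)."""
    return d_g * area * c_gas * t / z
```

Doubling Z halves n, which is why adjusting t and Z lets a wider range of Cgas stay within the linear (non-equilibrium) uptake regime.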
ERIC Educational Resources Information Center
Walker, Michele
2010-01-01
The purpose of this study was to observe, describe, and analyze the aspects of school capacity within an urban turnaround school. School capacity factors included: "Teacher Knowledge, Skills and Disposition"; "Professional Community"; "Program Coherence"; "Technical Resources"; and "Principal…
WĂłjcik, J.; Kujawska, T.; Nowicki, A.; Lewin, P.A.
2008-01-01
The primary goal of this work was to verify experimentally the applicability of the recently introduced Time-Averaged Wave Envelope (TAWE) method [1] as a tool for fast prediction of four-dimensional (4D) pulsed nonlinear pressure fields from arbitrarily shaped acoustic sources in attenuating media. The experiments were performed in water at the fundamental frequency of 2.8 MHz for spherically focused (focal length F = 80 mm) square (20 × 20 mm) and rectangular (10 × 25 mm) sources similar to those used in the design of 1D linear arrays operating with ultrasonic imaging systems. The experimental results obtained with 10-cycle tone bursts at three different excitation levels corresponding to linear, moderately nonlinear and highly nonlinear propagation conditions (0.045, 0.225 and 0.45 MPa on-source pressure amplitude, respectively) were compared with those yielded using the TAWE approach [1]. The comparison of the experimental results and numerical simulations has shown that the TAWE approach is well suited to predict (to within ±1 dB) both the spatial-temporal and spatial-spectral pressure variations in the pulsed nonlinear acoustic beams. The obtained results indicated that implementation of the TAWE approach enabled shortening of computation time in comparison with the time needed for prediction of the full 4D pulsed nonlinear acoustic fields using a conventional (Fourier-series) approach [2]. The reduction in computation time depends on several parameters, including the source geometry, dimensions, fundamental resonance frequency, excitation level as well as the strength of the medium nonlinearity. For the non-axisymmetric focused transducers mentioned above and excited by a tone burst corresponding to moderately nonlinear and highly nonlinear conditions, the execution time of computations was 3 and 12 hours, respectively, when using only a 1.5 GHz clock frequency, 32-bit processor PC laptop with 2 GB of RAM.
Such prediction of the full 4D pulsed field is not possible when using conventional, Fourier-series scheme as it would require increasing the RAM memory by at least 2 orders of magnitude. PMID:18474387
Valkov, T.V.; Tan, C.S.
1999-07-01
In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stator with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 2, the focus will be on the interaction of stator with the moving upstream rotor tip and streamwise vortices, the controlling parametric trends, and implications on design.
Valkov, T.V.; Tan, C.S.
1999-07-01
In a two-part paper, key computed results from a set of first-of-a-kind numerical simulations on the unsteady interaction of axial compressor stators with upstream rotor wakes and tip leakage vortices are employed to elucidate their impact on the time-averaged performance of the stator. Detailed interrogation of the computed flowfield showed that for both wakes and tip leakage vortices, the impact of these mechanisms can be described on the same physical basis. Specifically, there are two generic mechanisms with significant influence on performance: reversible recovery of the energy in the wakes/tip vortices (beneficial) and the associated nontransitional boundary layer response (detrimental). In the presence of flow unsteadiness associated with rotor wakes and tip vortices, the efficiency of the stator under consideration is higher than that obtained using a mixed-out steady flow approximation. The effects of tip vortices and wakes are of comparable importance. The impact of stator interaction with upstream wakes and vortices depends on the following parameters: axial spacing, loading, and the frequency of wake fluctuations in the rotor frame. At reduced spacing, this impact becomes significant. The most important aspect of the tip vortex is the relative velocity defect and the associated relative total pressure defect, which is perceived by the stator in the same manner as a wake. In Part 1, the focus will be on the framework of technical approach, and the interaction of stator with the moving upstream rotor wakes.
NASA Astrophysics Data System (ADS)
Moran-Lopez, Tiberius; Schilling, Oleg
2013-11-01
Reshocked Richtmyer-Meshkov turbulent mixing of gases with various Atwood numbers and shock Mach numbers is simulated using a third-order weighted essentially nonoscillatory implementation of a K-ε multicomponent Reynolds-averaged Navier-Stokes model. First, mixing layer widths from simulations with Mach number Ma = 1.20, Atwood number At = 0.67 (air/SF6), and different times of reshock are shown to be in very good agreement with the experimental data of Leinov et al. [J. Fluid Mech. 626, 449 (2009)]. Second, widths from simulations with Ma = 1.50 and At = ±0.21, ±0.67 and ±0.87 (corresponding to air/CO2, air/SF6 and H2/air) are compared to the large-eddy simulation data of Lombardini et al. [J. Fluid Mech. 670, 439 (2011)] and discussed. Budgets of the turbulent transport equations are considered to elucidate the mechanisms contributing to turbulent mixing in reshocked Richtmyer-Meshkov instability. Convergence of the mixing layer widths, mean fields, and turbulent fields under grid refinement is also assessed. This work was funded by the U.S. Department of Energy NNSA under the Predictive Science Academic Alliances Program by grant DE-FC52-08NA28616 and performed under the auspices of the DOE by LLNL under Contract DE-AC52-07NA27344.
Mazzella, Nicolas; Debenest, Timothée; Delmas, François
2008-09-01
Polar organic chemical integrative samplers (POCIS) were exposed for 9 days in two different microcosms that contained river waters spiked with deethylterbuthylazine, terbuthylazine and isoproturon. The experiment was performed with natural light and strong turbulence (flow velocities of about 15-50 cm s(-1)) to reproduce natural conditions. The concentrations were kept relatively constant in the first microcosm (2.6-3.6 µg l(-1)) and were variable in the second microcosm (peak concentrations ranged from 15 to 24 µg l(-1) during the 3-day pulse phase). The time-weighted average (TWA) concentrations were determined with both POCIS and repetitive grab sampling followed by solid-phase extraction. The results showed a systematic and significant overestimation of the TWA concentrations with the POCIS, most probably due to the use of sampling rates derived under a low-flow scenario. The results also showed that peak concentrations of pollutants are fully integrated by this passive sampler. Even if the POCIS cannot provide very accurate concentration estimates without the application of adequate sampling rate values or the use of performance reference compounds, it can be a genuinely useful tool for detecting episodic or short-term pollution events (e.g. increased herbicide concentrations during a flood), which may be missed with classical, low-frequency grab sampling. PMID:18649919
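For context, the TWA concentration from an integrative passive sampler such as a POCIS follows from the accumulated mass and a sampling rate R_s; the sketch below uses illustrative numbers, and the abstract's caveat applies: R_s is flow-dependent, so an R_s calibrated at low flow applied to a turbulent deployment overestimates C_TWA.

```python
def twa_concentration(n_accumulated_ug, sampling_rate_l_per_day, days):
    """C_TWA = n / (R_s * t): time-weighted average concentration (ug/L)
    from the mass accumulated in the sorbent over the deployment,
    valid in the integrative (linear-uptake) regime."""
    return n_accumulated_ug / (sampling_rate_l_per_day * days)

# Illustrative: 5.4 ug accumulated over a 9-day deployment with an
# assumed R_s of 0.2 L/day gives a TWA concentration of 3.0 ug/L.
c_twa = twa_concentration(5.4, 0.2, 9.0)
```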
NASA Astrophysics Data System (ADS)
Raju, M. S.; Krishnamurthy, L.
1986-07-01
The near-wake region in a ducted bluff-body combustor was investigated by finite-difference computations. The numerical predictions are based upon the time-independent, Reynolds-averaged Navier-Stokes equations and the k-epsilon turbulence model. The steady-state calculations address both nonreacting and reacting flowfields in a novel configuration to more realistically simulate some of the essential features of the primary zone of a gas turbine combustion chamber. This configuration is characterized by turbulent mixing and combustion in the recirculating near-wake region downstream of an axisymmetric bluff body due to two annular air streams--an outer swirl-free flow and an inner swirling flow--and a central fuel jet. The latter contains propane for reacting flows and carbon dioxide for nonreacting flows. In view of the large number of geometrical and flow parameters involved, the reported results are concerned with only a limited parametric examination with the major emphasis being on nonreacting flows. Questions addressed for a particular set of geometric parameters include the effects of variation of mass flow rates in all three streams and the influence of swirl in the middle stream. Reacting computations investigate the influence of swirl on combustion, as well as that of combustion on the flowfield.
NASA Astrophysics Data System (ADS)
Loubet, Benjamin; Carozzi, Marco
2015-04-01
Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emissions from a homogeneous source of known geometry. When the emission derives from different sources within the measured footprint, it should be treated as a multi-source problem. This work aims to assess whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located close to each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 × 3 square field sources (625 m2) and a set of sensors placed at the centre of each field at several heights, as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of the NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian) and strengths was used to evaluate the uncertainty of the inversion method.
Each numerical experiment covered a period of 28 days. The meteorological dataset of the FLUXNET FR-Gri site (Grignon, FR) in 2008 was employed. Several sensor heights were tested, from 0.25 m to 2 m. The multi-source inverse problem was solved under several sampling and field-trial strategies: considering 1 or 2 heights over each field, considering the background concentration as known or unknown, and considering block repetitions in the field set-up (3 repetitions). The inverse modelling approach proved suitable for discriminating large differences in NH3 emissions from small agronomic plots using integrating sensors. The method is sensitive to sensor height. The uncertainties and systematic biases are evaluated and discussed.
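A toy version of the multi-source inversion described above (the dispersion matrix here is random, standing in for coefficients that WindTrax or FIDES would supply in backward mode; all numbers are invented): stack the time-averaged concentrations in a vector, build the source-to-sensor dispersion matrix, and recover the nine plot strengths by linear least squares.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sensors, n_sources = 12, 9   # sensors over 9 plots plus 4 distant masts

# Modelled dispersion coefficients (concentration per unit source strength);
# placeholders for the dispersion-model output, assumed known.
D = rng.uniform(0.05, 1.0, size=(n_sensors, n_sources))
s_true = rng.uniform(0.0, 5.0, size=n_sources)  # plot emission strengths
c_background = 1.0                              # treated as known here
c_measured = D @ s_true + c_background          # simulated TWA concentrations

# Backward step: remove the background, then solve in the least-squares sense.
s_est, *_ = np.linalg.lstsq(D, c_measured - c_background, rcond=None)
```

With noise-free synthetic data the strengths are recovered exactly; with diffusion-sampler noise and an unknown background the system degrades, which is where the sensor-height and block-repetition choices discussed above matter.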
Using corporate finance to engineer an organizational turnaround.
Sussman, Jason H; Dziesinski, Ray R
2002-11-01
Georgia's Southern Regional Medical Center used a proven corporate finance approach to dramatically improve its financial position and integrate its strategic and financial planning. Managers throughout the organization were educated about principles of corporate finance. Reliable cash-flow projections were used to create a multiyear glide path to financial stability. Initiatives were tied to specific time frames and quantifiable financial goals and underwent a standardized review process. PMID:12656032
NASA Astrophysics Data System (ADS)
Morán-López, J. T.; Schilling, O.
2014-05-01
Turbulent mixing generated by shock-driven acceleration of a perturbed interface is simulated using a new multi-component Reynolds-averaged Navier-Stokes (RANS) model closed with a two-equation K-ε model. The model is implemented in a hydrodynamics code using a third-order weighted essentially non-oscillatory finite-difference method for the advection terms and a second-order central difference method for the gradients in the source and diffusion terms. In the present reshocked Richtmyer-Meshkov instability and mixing study, an incident shock generated in air progresses into a sulfur hexafluoride test section. The time evolution of the predicted mixing layer widths corresponding to six shock tube test section lengths is compared with experimental measurements and three-dimensional multi-mode numerical simulations. The mixing layer widths are also compared with the analytical self-similar power-law solution of the simplified model equations prior to reshock. A set of model coefficients and initial conditions specific to these six experiments is established, for which the widths before and after reshock agree very well with experimental and numerical simulation data. A second set of general coefficients that accommodates a broader range of incident shock Mach numbers, Atwood numbers, and test section lengths is also established by incorporating additional experimental data and previous RANS modeling. Terms in the budgets of the turbulent kinetic energy and dissipation rate equations are examined to evaluate the relative importance of turbulence production, dissipation and diffusion mechanisms during mixing. Convergence results for the mixing layer widths, mean fields, and turbulent fields under grid refinement are presented for each of the cases.
Angel, Brad M; Simpson, Stuart L; Chariton, Anthony A; Stauber, Jenny L; Jolley, Dianne F
2015-07-01
Intermittent, fluctuating and pulsed contaminant discharges result in organisms receiving highly variable contaminant exposures. Current water quality guidelines are predominantly derived using data from continuous exposure toxicity tests, and most frequently applied by regulators with the assumption that concentrations from a single sampling event will provide a meaningful approach to assessing potential effects. This study investigated the effect of single and multiple (daily) dissolved copper pulses on the marine diatom, Phaeodactylum tricornutum, including measurements of copper uptake and elimination to investigate the toxic mechanism. Copper pulses of between 0.5 and 24h and continuous exposures with equivalent 72-h time-averaged concentrations (TACs) resulted in similar biomass inhibition of P. tricornutum, with continuous exposures often being marginally more toxic. Rates of cell division generally recovered to control levels within 24h of the copper pulse removal. Upon resuspension in clean seawater, the extracellular copper per cell decreased rapidly, whereas the intracellular copper per cell decreased slowly. Negligible loss of copper from the total algal biomass indicated that P. tricornutum did not have an effective mechanism for eliminating copper from cells, rather the intracellular copper decreased as a result of dilution by cellular division as the algal growth rate recovered. The measurement of copper uptake after 72-h exposure and kinetics of elimination thereafter suggest that continuous exposures are marginally more toxic to P. tricornutum than pulsed copper exposures with equivalent TACs because slow internalization and saturation of algal membrane transport sites results in less copper uptake into pulse-exposed cells than continuously-exposed cells coupled with dilution of internalized copper via cellular division in the post-exposure period. In the case of P. 
tricornutum, the results indicate that water quality guidelines for copper based on continuous exposure will be conservative when applied to short-term discharges. PMID:25911575
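The exposure metric underlying this comparison, the 72-h time-averaged concentration, reduces for pulses in otherwise clean water to the concentration-duration product spread over the test window; a minimal sketch with illustrative numbers:

```python
def time_averaged_concentration(pulses, window_h=72.0):
    """TAC over the test window for (concentration_ug_per_l, duration_h)
    pulses, assuming zero exposure outside the pulses."""
    return sum(c * dur for c, dur in pulses) / window_h

# A single 3-h pulse at 24 ug/L has the same 72-h TAC (1 ug/L) as a
# continuous 1 ug/L exposure, the kind of pairing tested in the study.
```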
NASA Astrophysics Data System (ADS)
Baines, A. G.; Schwartz, J. J.; Cheadle, M. J.; John, B. E.
2005-12-01
Determining the rates of oceanic detachment faulting and how they compare to rates of seafloor spreading is of fundamental importance for constraining how these features form and their relationship to tectonic processes, including magmatic accretion, in slow-spreading environments. We use Pb/U ages of igneous zircon from lower crustal gabbros and sea-surface magnetic data from Atlantis Bank to determine half-spreading rates during detachment faulting. We suggest that these rates, which record crustal accretion and cooling of lower crust in the Antarctic plate, also reflect the time-averaged rate of motion on the detachment fault. During the time Atlantis Bank formed, from 11.2 to 12.5 Ma, both techniques yield a half-spreading rate of 14 ± 1 km/Ma. The magnetically determined full-spreading rate has been 14 km/Ma since 20 Ma, and magnetic data from neighboring segments show that this full spreading rate did not increase during the period of detachment faulting. Our data also show that the period of highly asymmetric spreading was confined to the segment containing Atlantis Bank, and so was associated with ridge migration and closure of the non-transform discontinuity to the east. Therefore, detachment faulting at Atlantis Bank occurred during a period of highly asymmetric spreading, a result consistent with sea-surface magnetic data from the FUJI dome at 63° 45'E on the SWIR. Our data require a re-evaluation of existing models that predict structural asymmetry but not asymmetric plate spreading rates during the formation of oceanic core-complexes. The half-spreading rate during detachment faulting suggests that for ~1.3 Ma the detachment fault acted as the primary plate boundary and that the axial valley was (within resolution) stationary with respect to the hanging wall (African Plate). If volcanism continued within the axial valley, a thick layer of extrusive material would have formed. This has not been identified.
A combination of hydrothermal cooling and strain localization on the detachment fault might lead to limited diking and volcanism within the hanging wall, thus restricting crustal accretion to the lower crust and producing a thick plutonic layer in the Antarctic Plate, a prediction consistent with seismic velocities and crustal thickness observed beneath Atlantis Bank.
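The half-spreading rate above is essentially the slope of a distance-versus-crustal-age relation. A minimal sketch, using hypothetical age-distance pairs chosen to be consistent with the reported ~14 km/Ma rate (the sample values are invented, not the authors' zircon data):

```python
def half_spreading_rate(ages_ma, distances_km):
    """Least-squares slope of distance vs. age, in km/Ma."""
    n = len(ages_ma)
    mean_t = sum(ages_ma) / n
    mean_d = sum(distances_km) / n
    num = sum((t - mean_t) * (d - mean_d) for t, d in zip(ages_ma, distances_km))
    den = sum((t - mean_t) ** 2 for t in ages_ma)
    return num / den

# hypothetical samples spanning the 11.2-12.5 Ma interval, spaced at 14 km/Ma
ages = [11.2, 11.6, 12.0, 12.5]      # Ma
dists = [0.0, 5.6, 11.2, 18.2]       # km from the youngest sample
rate = half_spreading_rate(ages, dists)   # -> 14 km/Ma
```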
Delfino, R J; Zeiger, R S; Seltzer, J M; Street, D H
1998-01-01
Experimental research in humans and animals points to the importance of adverse respiratory effects from short-term particle exposures and to the importance of proinflammatory effects of air pollutants, particularly O3. However, particle averaging time has not been subjected to direct scientific evaluation, and there is a lack of epidemiological research examining both this issue and whether modification of air pollutant effects occurs with differences in asthma severity and anti-inflammatory medication use. The present study examined the relationship of adverse asthma symptoms (bothersome or interfered with daily activities or sleep) to O3 and particles ≤10 µm (PM10) in a Southern California community in the air inversion zone (1200-2100 ft) with high O3 and low PM (R = 0.3). A panel of 25 asthmatics 9-17 years of age was followed daily, August through October 1995 (n = 1,759 person-days excluding one subject without symptoms). Exposures included stationary outdoor hourly PM10 (highest 24-hr mean, 54 µg/m3, versus median of 1-hr maximums, 56 µg/m3) and O3 (mean of 1-hr maximums, 90 ppb; 5 days ≥120 ppb). Longitudinal regression analyses utilized the generalized estimating equations (GEE) model controlling for autocorrelation, day of week, outdoor fungi, and weather. Asthma symptoms were significantly associated with both outdoor O3 and PM10 in single-pollutant and co-regressions, with 1-hr and 8-hr maximum PM10 having larger effects than the 24-hr mean. Subgroup analyses showed effects of current-day PM10 maximums were strongest in 10 more frequently symptomatic (MS) children: the odds ratios (ORs) for adverse symptoms from 90th percentile increases were 2.24 [95% confidence interval (CI), 1.46-3.46] for 1-hr PM10 (47 µg/m3); 1.82 (CI, 1.18-2.81) for 8-hr PM10 (36 µg/m3); and 1.50 (CI, 0.80-2.80) for 24-hr PM10 (25 µg/m3).
Subgroup analyses also showed the effect of current-day O3 was strongest in 14 less frequently symptomatic (LS) children: the ORs were 2.15 (CI, 1.04-4.44) for 1-hr O3 (58 ppb) and 1.92 (CI, 0.97-3.80) for 8-hr O3 (46 ppb). Effects of 24-hr PM10 were seen in both groups, particularly with 5-day moving averages (ORs were 1.95 for MS and 4.03 for LS; p ≤ 0.05). The largest effects were in 7 LS children not on anti-inflammatory medications [5-day, 8-hr PM10, 9.66 (CI, 2.80-33.21); current-day, 1-hr O3, 4.14 (CI, 1.71-11.85)]. Results suggest that examination of short-term particle excursions, medication use, and symptom severity in longitudinal studies of asthma yields sensitive measures of adverse respiratory effects of air pollution. PMID:9799192
Field demonstration of rapid turnaround, multilevel groundwater screening
Tingle, A.R.; Baker, L.; Long, D.D.; Miracle, M.
1994-09-01
A combined technology approach to rapidly characterizing source area and downgradient groundwater associated with a past fuel spill has been field tested. The purpose of this investigation was to determine the presence and extent of fuel-related compounds or indications of their biodegradation in groundwater. The distance from the source area to be investigated was established by calculating the potential extent of a plume based only on groundwater flow velocities. To accomplish this objective, commercially available technologies were combined and used to rapidly assess the source area and downgradient groundwater associated with the fuel discharge. The source of contamination that was investigated overlies glacial sand and gravel outwash deposits. Historical data suggest that from 1955 to 1970 as many as 1 to 6 million gallons of aviation gasoline (AVGAS) were lost at the study area. Although the remedial investigation (RI) for this study area indicated fuel-related groundwater contamination at the source area, fuel-related contamination was not detected in downgradient monitoring wells. Rapid horizontal groundwater velocities and the 24-year time span since the last reported spill further suggest that a plume of contaminated groundwater could extend several thousand feet downgradient. The lack of contamination downgradient from the source suggests two possibilities: (1) monitoring wells installed during the RI did not intersect the plume or (2) fuel-related compounds had naturally degraded.
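The screening distance described above is a simple advective estimate: flow velocity times elapsed time. A back-of-envelope sketch; the velocity value is assumed for illustration, not taken from the site data:

```python
def potential_plume_extent_ft(velocity_ft_per_day, years_since_spill):
    """Advective travel distance, ignoring retardation and degradation."""
    return velocity_ft_per_day * years_since_spill * 365.0

# hypothetical: 0.5 ft/day horizontal velocity, 24 years since the last spill
extent = potential_plume_extent_ft(velocity_ft_per_day=0.5, years_since_spill=24)
# -> 4380 ft, i.e. "several thousand feet" downgradient
```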
ERIC Educational Resources Information Center
McLester, Susan
2011-01-01
The Obama administration has grand hopes for turning around the nation's lowest-performing schools, in part by allocating $3.5 billion for School Improvement Grants. Unfortunately, there simply aren't enough qualified principals to replace those mandated to be fired under two of the four school improvement models that the federal government says…
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation, with 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 µm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 µm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180
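The gain from averaging repeated acquisitions can be illustrated numerically: averaging N independent noisy frames cuts noise power by a factor of N, so SNR should rise by about 10·log10(N) dB (≈11.8 dB for the 15 repetitions used above). A toy simulation of that effect, not the authors' processing pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 256))          # stand-in intensity profile
frames = signal + rng.normal(0.0, 0.5, size=(15, 256))   # 15 noisy repeat scans

def snr_db(noisy, clean):
    """SNR in dB: clean-signal power over residual-noise power."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

single_frame_snr = snr_db(frames[0], signal)
averaged_snr = snr_db(frames.mean(axis=0), signal)
# the improvement should land near 10*log10(15) ≈ 11.8 dB
```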
Temperature averaging thermal probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V. (inventors)
1985-01-01
A thermal probe to average temperature fluctuations over a prolonged period was formed with a temperature sensor embedded inside a solid object of a thermally conducting material. The solid object is held in a position equidistantly spaced apart from the interior surfaces of a closed housing by a mount made of a thermally insulating material. The housing is sealed to trap a vacuum or mass of air inside and thereby prevent transfer of heat directly between the environment outside of the housing and the solid object. Electrical leads couple the temperature sensor with a connector on the outside of the housing. Other solid objects of different sizes and materials may be substituted for the cylindrically-shaped object to vary the time constant of the probe.
ERIC Educational Resources Information Center
Mitchem, John
1989-01-01
Examples used to illustrate Simpson's paradox for secondary students include probabilities, university admissions, batting averages, student-faculty ratios, and average and expected class sizes. Each result is explained. (DC)
ERIC Educational Resources Information Center
Baeza, Marco A.
2010-01-01
This study analyzed skills, strategies, and theories that new middle school principals used to be successful during their transition period (the first 90 days) in turnaround schools. Based on research on transitions, three research questions guided the study: 1. Do middle school principals in a turnaround school situation find the transition…
ERIC Educational Resources Information Center
Calkins, Andrew; Guenther, William; Belfiore, Grace; Lash, Dave
2007-01-01
The turnaround recommendations and framework in "The Turnaround Challenge" grew out of both new research and synthesis of extensive existing research, as carried out by Mass Insight Education & Research Institute and its partners since September 2005. If the main report is the tip of the proverbial iceberg, this supplement represents at least part…
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Agnese, Carmelo; Baiamonte, Giorgio; Canone, Davide; Previati, Maurizio; Cat Berro, Daniele; Mercalli, Luca
2015-04-01
Modeling of rainfall statistical structure represents an important research area in hydrology, meteorology, atmospheric physics and climatology, because of its several theoretical and practical implications. The statistical inference of the alternation of wet periods (WP) and dry periods (DP) in daily rainfall records can be achieved through the modelling of inter-arrival time-series (IT), defined as the succession of times elapsed between a rainy day and the one immediately preceding it. It has been shown previously that the statistical structure of IT can be well described by the 3-parameter Lerch distribution (Lch). In this work, Lch was successfully applied to IT data belonging to a sub-alpine area (Piemonte and Valle d'Aosta, NW Italy); furthermore, the same statistical procedure was applied to the rainfall depths associated with the ITs in the daily rainfall records. The analysis was carried out for 26 long daily rainfall series (≥90 yr of observations). The main objective of this work was to detect temporal trends in some features describing the statistical structure of both the inter-arrival time-series (IT) and the associated rainfall depth (H). Each time-series was divided into five-year subsets, and for each of them the Lch parameters were estimated, so as to extend the trend analysis to some high quantiles.
ERIC Educational Resources Information Center
Ylimaki, Rose M.; Brunderman, Lynnette; Bennett, Jeffrey V.; Dugan, Thad
2014-01-01
Today's accountability policies and changing demographics have created conditions in which leaders must rapidly build school capacity and improve outcomes in culturally diverse schools. This article presents findings from a mixed-methods evaluation of an Arizona Turnaround Leadership Development Project. The project drew on studies of…
ERIC Educational Resources Information Center
Jones, Stephanie; Clarke, Lane; Enriquez, Grace
2009-01-01
This book demonstrates a five-part framework for teachers, reading specialists, and literacy coaches who want to help their least engaged students become powerful readers. Merging theory and practice, the guide offers successful strategies to reach your "struggling" learners. The authors show how you can "turn-around" your instructional practice,…
ERIC Educational Resources Information Center
Reyes, Augustina; Garcia, Andres
2014-01-01
In an era of school reform and high stakes accountability, the major challenge in education is to turn around the nation's lowest-performing schools. National policy provides abundant resources with often restrictive prerequisites. Research indicates that quality principals and teachers are the most important factors for turning around the…
ERIC Educational Resources Information Center
Carlson, Robert V.
This paper presents a case study of a rural community exhibiting a dramatic turnaround in community support for a new school bond issue. Demographic change was partly responsible for the change in community attitudes, with two waves of immigration altering the long-term conservative orientation of this community. After a series of failed…
ERIC Educational Resources Information Center
Marsh, Julie A.; Strunk, Katharine O.; Bush, Susan
2013-01-01
Purpose: Despite the popularity of school "turnaround" and "portfolio district" management as solutions to low performance, there has been limited research on these strategies. The purpose of this paper is to address this gap by exploring the strategic case of Los Angeles Unified School District's Public School Choice…
ERIC Educational Resources Information Center
Jacobson, Stephen L.; Szczesek, Jill
2013-01-01
This study investigates the economic impact of a "turnaround" school on real property values in its surrounding community as related to the argument introduced by Tiebout in 1956 correlating local public goods, in this case school success, to housing-location decision making. Using single-family home sales found on the Multiple Listing System and…
States' Average College Tuition.
ERIC Educational Resources Information Center
Eglin, Joseph J., Jr.; And Others
This report presents statistical data on trends in tuition costs from 1980-81 through 1995-96. The average tuition for in-state undergraduate students of 4-year public colleges and universities for academic year 1995-96 was approximately 8.9 percent of median household income. This figure was obtained by dividing the students' average annual…
Ukawa, Shigekazu; Tamakoshi, Akiko; Yatsuya, Hiroshi; Yamagishi, Kazumasa; Ando, Masahiko; Iso, Hiroyasu
2015-01-01
Background Sedentary behavior is associated with cardiovascular disease, diabetes mellitus, and cancer morbidity, and watching television (TV) is an important sedentary behavior. The aim of this study is to clarify the association between TV viewing time and chronic obstructive pulmonary disease (COPD)-related mortality in Japanese adults. Methods Using the Cox proportional hazard model, we assessed COPD-related mortality by TV viewing time in a national cohort of 33 414 men and 43 274 women without cancer, stroke, myocardial infarction, or tuberculosis at baseline (1988–1990). Results The median follow-up was 19.4 years; 244 men and 34 women died of COPD. Men watching ≥4 hours/day of TV were more likely to die of COPD than those watching <2 hours/day (hazard ratio 1.63; 95% confidence interval, 1.04–2.55), independent of major confounders. No association was found in women. Conclusions Avoiding a sedentary lifestyle, particularly prolonged TV viewing, may help in preventing death from COPD among men. PMID:25947581
NASA Astrophysics Data System (ADS)
Kim, Seung-Kyu; Park, Jong-Eun
2014-06-01
Despite remarkable achievements with some chemicals, a field-measurement technique has not been advanced for volatile hydrophobic organic chemicals (HOCs) that are the subjects of international concern. This study assesses the applicability of passive air sampling (PAS) by comparing PUF-PAS and its modified SIP-PAS, which was made by impregnating XAD-4 powder into PUF, overviewing the principles of PAS, screening sensitive parameters, and determining the uncertainty range of PAS-derived concentration. The PAS air sampling rate determined in this study, corrected by a co-deployed low-volume active air sampler (LAS) for neutral PFCs as model chemicals, was 1.2 m3 day-1. Our assessment shows that the improved sorption capacity in a SIP lengthens PAS deployment duration by expanding the linear uptake range and thus enlarges the effective air sampling volume and detection frequency of chemicals at trace level. Consequently, volatile chemicals can be collected over sufficiently long times without reaching equilibrium when using SIP, while this is not possible for PUF. The most sensitive parameter influencing PAS-derived CA was the air-side mass transfer coefficient (kA), implying the necessity of spiking depuration chemicals (DCs) because this parameter is strongly related to meteorological conditions. Uncertainty in partition coefficients (KPSM-A or KOA) influences PAS-derived CA to a greater extent for lower-KPSM-A chemicals. Also, the PAS-derived CA has an uncertainty range from half to 3-fold the calculated level. This work is expected to establish solid grounds for the improvement of field measurement techniques for HOCs.
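In the linear uptake range, a PAS-derived air concentration is just the collected mass divided by the effective air volume (sampling rate × deployment time). A sketch using the 1.2 m3/day rate reported above; the collected mass and deployment length are invented for illustration:

```python
def pas_air_concentration(mass_ng, sampling_rate_m3_per_day, days):
    """PAS-derived concentration (ng/m3); valid only in the linear-uptake range."""
    effective_air_volume_m3 = sampling_rate_m3_per_day * days
    return mass_ng / effective_air_volume_m3

# hypothetical: 84 ng collected over a 70-day deployment at 1.2 m3/day
c_air = pas_air_concentration(mass_ng=84.0, sampling_rate_m3_per_day=1.2, days=70)
# -> 1.0 ng/m3 over an 84 m3 effective air volume
```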
Designing Digital Control Systems With Averaged Measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1990-01-01
Rational criteria represent improvement over "cut-and-try" approach. Recent development in theory of control systems yields improvements in mathematical modeling and design of digital feedback controllers using time-averaged measurements. By using one of new formulations for systems with time-averaged measurements, designer takes averaging effect into account when modeling plant, eliminating need to iterate design and simulation phases.
NASA Astrophysics Data System (ADS)
Rung-Arunwan, Tawat; Siripunvaraporn, Weerachai; Utada, Hisashi
2016-04-01
Through a large number of magnetotelluric (MT) observations conducted in a study area, one can obtain regional one-dimensional (1-D) features of the subsurface electrical conductivity structure simply by taking the geometric average of determinant invariants of observed impedances. This method was proposed by Berdichevsky and coworkers, which is based on the expectation that distortion effects due to near-surface electrical heterogeneities will be statistically smoothed out. A good estimation of a regional mean 1-D model is useful, especially in recent years, to be used as a priori (or a starting) model in 3-D inversion. However, the original theory was derived before the establishment of the present knowledge on galvanic distortion. This paper, therefore, reexamines the meaning of the Berdichevsky average by using the conventional formulation of galvanic distortion. A simple derivation shows that the determinant invariant of distorted impedance and its Berdichevsky average is always downward biased by the distortion parameters of shear and splitting. This means that the regional mean 1-D model obtained from the Berdichevsky average tends to be more conductive. As an alternative rotational invariant, the sum of the squared elements (ssq) invariant is found to be less affected by bias from distortion parameters; thus, we conclude that its geometric average would be more suitable for estimating the regional structure. We find that the combination of determinant and ssq invariants provides parameters useful in dealing with a set of distorted MT impedances.
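The two rotational invariants discussed above can be written down directly for a 2x2 impedance tensor, and for an undistorted 1-D impedance they reduce to the same value. A minimal sketch with a synthetic tensor, not field data:

```python
import math

def det_invariant(Z):
    """|det Z|^(1/2) for a 2x2 complex impedance Z = ((Zxx, Zxy), (Zyx, Zyy))."""
    (zxx, zxy), (zyx, zyy) = Z
    return abs(zxx * zyy - zxy * zyx) ** 0.5

def ssq_invariant(Z):
    """Square root of half the sum of squared element magnitudes."""
    (zxx, zxy), (zyx, zyy) = Z
    return math.sqrt((abs(zxx)**2 + abs(zxy)**2 + abs(zyx)**2 + abs(zyy)**2) / 2)

def geometric_average(values):
    """Berdichevsky-style geometric mean over sites."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# ideal 1-D impedance: off-diagonal elements only, equal magnitude
Z1d = ((0j, 2 + 2j), (-(2 + 2j), 0j))
# det and ssq invariants agree for this undistorted case; galvanic
# distortion biases the det invariant downward, as the abstract notes
```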
Vibration analysis by time-average holography
NASA Technical Reports Server (NTRS)
Aprahamian, R.; Evensen, D. A.
1971-01-01
Method photographs vibration modes as high as 100 kHz. Technique also measures vibration decay and damping characteristics. Compensation techniques for structural size limitations and for background noise excitations are discussed.
Estimating Average Domain Scores.
ERIC Educational Resources Information Center
Pommerich, Mary; Nicewander, W. Alan
A simulation study was performed to determine whether a group's average percent correct in a content domain could be accurately estimated for groups taking a single test form and not the entire domain of items. Six Item Response Theory (IRT) -based domain score estimation methods were evaluated, under conditions of few items per content area per…
ERIC Educational Resources Information Center
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Residual life assessment of the SSME/ATD HPOTP turnaround duct (TAD)
NASA Technical Reports Server (NTRS)
Gross, R. Steven
1996-01-01
This paper is concerned with the prediction of the low cycle thermal fatigue behavior of a component in a developmental (ATD) high pressure liquid oxygen turbopump (HPOTP) for the Space Shuttle Main Engine (SSME). This component is called the Turnaround Duct (TAD). The TAD is a complex single-piece casting of MAR-M-247 material. Its function is to turn the hot turbine exhaust gas (a 1200 F hydrogen-rich gas stream) so that it can exhaust radially out of the turbopump. In very simple terms, the TAD consists of two rings connected axially by 22 hollow airfoil-shaped struts, with turning vanes placed at the top, middle, and bottom of each strut. The TAD is attached to the other components of the pump via bolts passing through 14 of the 22 struts. Of the remaining 8 struts, four are equally spaced (90 deg interval) and contain a cooling tube through which liquid hydrogen passes on its way to cool the shaft bearing assemblies. The remaining 4 struts are empty. One of the pump units in the certification test series was destructively examined after 22 test firings. Substantial axial cracking was found in two of the struts which contain cooling tubes. None of the other 20 struts showed any sign of internal cracking. This unusual low cycle thermal fatigue behavior within the two cooling-tube struts is the focus of this study.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
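The structure of such an estimator can be shown with toy numbers: average the cheap approximation over many samples, then add an exact-minus-approximate correction measured on a small matched subset, which removes the approximation's bias. This sketch only illustrates the bookkeeping, not an actual lattice computation:

```python
# many cheap approximate measurements (toy numbers, not lattice data)
approx_all = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96, 1.04, 1.00]
# expensive exact measurements on a small subset, with the approximation
# evaluated on the same configurations so the correction is unbiased
exact_sub = [1.10, 1.06]
approx_sub = [1.02, 0.97]

def mean(xs):
    return sum(xs) / len(xs)

correction = mean([e - a for e, a in zip(exact_sub, approx_sub)])
ama_estimate = mean(approx_all) + correction   # approximation bias removed
```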
A Bound Violation on the Galaxy Group Scale: The Turn-around Radius of NGC 5353/4
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Kim, Suk; Rey, Soo-Chang
2015-12-01
The first observational evidence for the violation of the maximum turn-around radius on the galaxy group scale is presented. The NGC 5353/4 group is chosen as an ideal target for our investigation of the bound-violation because of its proximity, low-density environment, optimal mass scale, and the existence of a nearby thin straight filament. Using the observational data on the line-of-sight velocities and three-dimensional distances of the filament galaxies located in the bound zone of the NGC 5353/4 group, we construct their radial velocity profile as a function of separation distance from the group center and then compare it to the analytic formula obtained empirically by Falco et al. to find the best-fit value of an adjustable parameter with the help of the maximum likelihood method. The turn-around radius of NGC 5353/4 is determined to be the separation distance where the adjusted analytic formula for the radial velocity profile yields zero. The estimated turn-around radius of NGC 5353/4 turned out to substantially exceed the upper limit predicted by the spherical model based on the ΛCDM cosmology. Even when the restrictive condition of spherical symmetry is released, the estimated value is found to be only marginally consistent with the ΛCDM expectation.
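Operationally, the turn-around radius is the separation at which a fitted radial velocity profile crosses zero. The profile below is a hypothetical stand-in (Hubble flow minus a 1/r infall term), NOT the Falco et al. formula, and the parameter values are invented:

```python
def radial_velocity(r_mpc, hubble=70.0, infall=280.0):
    """Toy radial velocity profile in km/s: outflow term minus infall term."""
    return hubble * r_mpc - infall / r_mpc

def turn_around_radius(v, lo=0.1, hi=10.0, tol=1e-8):
    """Bisection for the zero crossing of v on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if v(lo) * v(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r_turn = turn_around_radius(radial_velocity)  # analytic zero at r = 2 Mpc here
```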
On radar time and the twin ``paradox''
NASA Astrophysics Data System (ADS)
Dolby, Carl E.; Gull, Stephen F.
2001-12-01
In this paper we apply the concept of radar time (popularized by Bondi in his work on k calculus) to the well-known relativistic twin "paradox." Radar time is used to define hypersurfaces of simultaneity for a class of traveling twins, from the "immediate turn-around" case, through the "gradual turn-around" case, to the "uniformly accelerating" case. We show that this definition of simultaneity is independent of choice of coordinates, and assigns a unique time to any event (with which the traveling twin can send and receive signals), resolving some common misconceptions.
Srivatsav, Siddhart; Webster, Jacquelyn; Webster, Michael
2015-01-01
The average color in a scene is a potentially important cue to the illuminant and thus for color constancy, but it remains unknown how well and in what ways observers can estimate the mean chromaticity. We examined this by measuring the variability in "achromatic" settings for stimuli composed of different distributions of colors. The displays consisted of a 15 by 15 palette of colors shown on a gray background on a monitor, with each chip subtending 0.5 deg. Individual colors were randomly sampled from varying contrast ranges along the luminance, S and LM cardinal axes. Observers were instructed to adjust the chromaticity of the palette so that the mean was gray, with variability estimated from 20 or more repeated settings. This variability increased progressively with increasing contrast in the distributions, with large increases for chromatic contrast but also weak effects for added luminance contrast. Signals along the cardinal axes are relatively independent in many detection and discrimination tasks, but showed strong interference in the white estimates. Specifically, adding S contrast increased variability in the white settings along both the S and LM axes, and vice versa. This "cross-masking" and the effects of chromatic variance in general may occur because observers cannot explicitly perceive or represent the mean of a set of qualitatively different hues (e.g. that red and green hues average to gray), and thus may infer the mean only indirectly (e.g. from the relative saturation of different hues). Meeting abstract presented at VSS 2015. PMID:26326088
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conduc- tivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water- management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public- supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary con- ditions of the hydrologic system. These assump- tions were based on geologic and hydrologic infor- mation determined from previous investigations. The reliability of the estimated values for hydro- logic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness- weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic- conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. 
Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distri- bution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water with- drawals were the least and water levels the highest, and again in late summer, when ground- water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased with- drawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic con- ductivity was estimated to be the largest because the drillers described the sediments to be pre- dominantly clean and coarse grained. Both of these areas also had steep slopes in the potentiometric surface. 
Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
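The velocity mapping described above rests on the standard relation between Darcy flux and average linear velocity, v = K·i/n, where K is hydraulic conductivity, i is the slope of the potentiometric surface, and n is porosity. A minimal sketch; the gradient value below is assumed for illustration, while K and n are drawn from the ranges reported in the abstract:

```python
def average_linear_velocity(K_ft_per_day, gradient, porosity):
    """Average linear ground-water velocity (ft/day): Darcy flux divided by porosity."""
    return K_ft_per_day * gradient / porosity

# Illustrative values: K = 250 ft/day (clean, coarse-grained sediments near the
# mountain front), an assumed gradient of 0.01, and a porosity of 25%.
v = average_linear_velocity(K_ft_per_day=250.0, gradient=0.01, porosity=0.25)
print(v)  # 10.0 ft/day, within the reported 0.06-144 ft/day range
```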
Chen, Guang-Hong; Li, Yinsheng
2015-01-01
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracy. For in vivo contrast enhanced cone beam CT data acquired from a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images free of limited-view artifacts and with clear contrast differences across the three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding to approximately 60° angular subsectors. PMID:26233197
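The nuclear-norm regularizer at the heart of SMART-RECON is typically handled with singular value thresholding, the proximal operator of the nuclear norm. The sketch below illustrates that building block on a toy spatial-temporal matrix; it is not the authors' full iterative algorithm, and the matrix sizes and noise level are invented for illustration:

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Proximal operator of tau * (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy spatial-temporal matrix: 6 "pixels" (rows) by 4 time frames (columns).
rng = np.random.default_rng(0)
static = rng.standard_normal((6, 1)) @ np.ones((1, 4))   # rank-1 shared anatomy
X = static + 0.1 * rng.standard_normal((6, 4))           # small frame-to-frame changes
Y = singular_value_threshold(X, tau=0.5)

# Thresholding can only shrink singular values, promoting a low-rank solution.
print(np.linalg.matrix_rank(Y) <= np.linalg.matrix_rank(X))  # True
```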
Americans' Average Radiation Exposure
NA
2000-08-11
We live with radiation every day. We receive radiation exposures from cosmic rays from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and our own bodies. We are also exposed to man-made sources of radiation, including medical and dental treatments, television sets, and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is the high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. 
The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical engineering applications.
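The two averaging operations DMA composes, a running time average updated during the DNS followed by a volume average onto a coarser mesh, can be sketched as follows. This is a one-dimensional toy with an invented field and grid sizes; the coupling correlations of the full method are omitted:

```python
import numpy as np

def update_running_average(avg, sample, n):
    """Incremental running time average after the n-th sample (n >= 1)."""
    return avg + (sample - avg) / n

def volume_average(field, factor):
    """Block-average a 1D field onto a mesh coarsened by 'factor'."""
    return field.reshape(-1, factor).mean(axis=1)

# Running time average over a short "DNS-like" run on a fine 8-cell grid...
avg = np.zeros(8)
for n in range(1, 101):
    sample = np.arange(8.0) + np.sin(0.5 * n)    # mean profile plus a fluctuation
    avg = update_running_average(avg, sample, n)

# ...then volume-average the time-averaged field onto a grid twice as coarse.
coarse = volume_average(avg, factor=2)
print(coarse.shape)  # (4,)
```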
Ensemble averaging of acoustic data
NASA Technical Reports Server (NTRS)
Stefanski, P. K.
1982-01-01
A computer program called Ensemble Averaging of Acoustic Data is documented. The program samples analog data, analyzes the data, and displays them in the time and frequency domains. Hard copies of the displays are the program's output. The documentation includes a description of the program and detailed user instructions. This software was developed for use on the Ames 40- by 80-Foot Wind Tunnel's Dynamic Analysis System, consisting of a PDP-11/45 computer, two RK05 disk drives, a Tektronix 611 keyboard/display terminal, an FPE-4 Fourier Processing Element, and an analog-to-digital converter.
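Ensemble (synchronous) averaging of repeated records is the core operation of such a program: averaging N aligned records leaves the coherent signal intact while reducing uncorrelated noise by roughly 1/√N. A minimal sketch, with the signal, record count, and noise level all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)                    # the repeatable acoustic event

# 64 records of the same event, each buried in uncorrelated noise
records = signal + rng.standard_normal((64, t.size))
ensemble_average = records.mean(axis=0)               # sample-by-sample average

noise_single = np.std(records[0] - signal)            # roughly 1.0
noise_averaged = np.std(ensemble_average - signal)    # roughly 1/sqrt(64)
print(noise_averaged < noise_single)  # True
```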
Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average
ERIC Educational Resources Information Center
DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.
2007-01-01
Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…
Rosowski, John J.; Cheng, Jeffrey Tao; Ravicz, Michael E.; Hulli, Nesim; Hernandez-Montes, Maria; Harrington, Ellery; Furlong, Cosme
2009-01-01
Time-averaged holograms describing the sound-induced motion of the tympanic membrane (TM) in cadaveric preparations from three mammalian species and one live ear were measured using opto-electronic holography. This technique allows rapid measurements of the magnitude of motion of the tympanic membrane surface at frequencies as high as 25 kHz. The holograms measured in response to low and middle-frequency sound stimuli are similar to previously reported time-averaged holograms. However, at higher frequencies (f > 4 kHz), our holograms reveal unique TM surface displacement patterns that consist of highly-ordered arrangements of multiple local displacement magnitude maxima, each of which is surrounded by nodal areas of low displacement magnitude. These patterns are similar to modal patterns (two-dimensional standing waves) produced by either the interaction of surface waves traveling in multiple directions or the uniform stimulation of modes of motion that are determined by the structural properties and boundary conditions of the TM. From the ratio of the displacement magnitude peaks to nodal valleys in these apparent surface waves, we estimate a Standing Wave Ratio of at least 4 that is consistent with energy reflection coefficients at the TM boundaries of at least 0.35. It is also consistent with small losses within the uniformly stimulated modal surface waves. We also estimate possible TM surface wave speeds that vary with frequency and species from 20 to 65 m/s, consistent with other estimates in the literature. The presence of standing wave or modal phenomena has previously been intuited from measurements of TM function, but is ignored in some models of tympanic membrane function. Whether these standing waves result either from the interactions of multiple surface waves that travel along the membrane, or by uniformly excited modal displacement patterns of the entire TM surface is still to be determined. PMID:19328841
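The link between the measured Standing Wave Ratio and the quoted energy reflection coefficient is the standard relation r = (SWR − 1)/(SWR + 1) for the amplitude reflection, squared to get the energy reflection. A quick check against the numbers above:

```python
def energy_reflection_from_swr(swr):
    """Energy reflection coefficient implied by a standing wave ratio."""
    r = (swr - 1.0) / (swr + 1.0)    # amplitude reflection coefficient
    return r * r

# An SWR of at least 4 implies r = 0.6 and an energy reflection of 0.36,
# consistent with the "at least 0.35" quoted in the abstract.
print(round(energy_reflection_from_swr(4.0), 2))  # 0.36
```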
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
The objective of this research is to develop turbulence models to predict the flow and heat transfer fields dominated by the curvature effect, such as those encountered in turbine cascades and turn-around ducts. A Navier-Stokes code has been developed using an explicit Runge-Kutta method with a two-layer k-epsilon/ARSM (Algebraic Reynolds Stress Model), Chien's Low Reynolds Number (LRN) k-epsilon model, and Coakley's LRN q-omega model. The near-wall pressure strain correlation term was included in the ARSM. The formulation is applied to Favre-averaged N-S equations, and no thin-layer approximations are made in either the mean flow or turbulence transport equations. Anisotropic scaling of artificial dissipation terms was used. A locally variable timestep was also used to improve convergence. Detailed comparisons were made between computations and data measured in a turbine cascade by Arts et al. at the Von Karman Institute. The surface pressure distributions and wake profiles were predicted well by all the models. The blade heat transfer is predicted well by the k-epsilon/ARSM model, as well as the k-epsilon model. It was found that the onset of boundary layer transition on both surfaces is highly dependent upon the level of local freestream turbulence intensity, which is strongly influenced by the streamline curvature. Detailed computation of the flow in the turn-around duct has been carried out and validated against the data of Monson as well as Sandborn. The computed results at various streamwise locations, on both the concave and convex sides, are compared with flow and turbulence data, including the separation zone on the inner wall. The k-epsilon/ARSM model yielded relatively better results than the two-equation turbulence models. A detailed assessment of the turbulence models has been made with regard to their applicability to curved flows.
Averaging Robertson-Walker cosmologies
Brown, Iain A.; Robbers, Georg; Behrend, Juliane E-mail: G.Robbers@thphys.uni-heidelberg.de
2009-04-15
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω⁰_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models.
Average-cost based robust structural control
NASA Technical Reports Server (NTRS)
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
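The averaged-cost idea can be illustrated by averaging a quadratic (H2-type) cost over sampled values of an uncertain natural frequency. This is a hedged sketch of the general concept, not the paper's exact cost functionals; the mode, damping ratio, and frequency spread below are all assumed:

```python
import numpy as np

def h2_cost_squared(wn, zeta=0.02):
    """Squared H2-type cost of a lightly damped mode: trace(C P C^T), where P
    solves the Lyapunov equation A P + P A^T + B B^T = 0."""
    A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    I = np.eye(2)
    # Solve the Lyapunov equation via its Kronecker (vectorized) form.
    L = np.kron(I, A) + np.kron(A, I)
    P = np.linalg.solve(L, -(B @ B.T).flatten()).reshape(2, 2)
    return float(C @ P @ C.T)

# Average the cost over sampled natural frequencies (an assumed +/-10% spread).
samples = [0.9, 1.0, 1.1]
average_cost = float(np.mean([h2_cost_squared(wn) for wn in samples]))
print(average_cost > 0.0)  # True: the cost stays bounded over the parameter set
```

For this second-order mode the analytic value is 1/(4·ζ·ωn³), so the Lyapunov solve can be checked against 12.5 at ωn = 1, ζ = 0.02.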
NASA Astrophysics Data System (ADS)
Hebner, G. A.; Riley, M. E.; Greenberg, K. E.
2002-10-01
A method for measuring the interactions of dust particles within a two-dimensional (2D) layer in a plasma is presented, along with the use of dust as a probe for determining plasma presheath electric fields. Particle interactions were directly determined from the lateral compression of two-dimensional plasma dust crystals confined in a parabolic potential well. The shape of the parabolic potential well was confirmed by observing trajectories of single particles falling within the well. Particle trajectories were in good agreement with values calculated using gas temperatures derived from laser-induced-fluorescence measurements of the argon metastable velocity distributions and assuming diffusive gas scattering. Measured particle separations combined with an equation of state for the crystal were used to derive values for the plasma screening length and the charge on the particles. Screening lengths and charges were measured for a range of plasma power, pressure, and particle diameter (mass). Analysis of the particle heights and charge were used to derive the time-averaged electric fields at the edge of the rf driven plasma sheath. Measured electric fields were between 5 and 22 V/cm. For smaller particle diameters, the ion wind force was comparable to the force of gravity. These measurements showed that the particles are confined to the bulk-plasma side of the classical Bohm point.
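The sheath-field estimate above follows from the levitation force balance QE = mg for a particle of charge Q and mass m. A sketch with invented particle parameters (diameter, charge, and a melamine-like density are all assumed, chosen only to land near the reported field range):

```python
import math

def sheath_field_for_levitation(diameter_um, charge_elem, density=1510.0):
    """Electric field (V/cm) that balances gravity on a small sphere.

    diameter_um: particle diameter in micrometers (assumed value)
    charge_elem: particle charge in elementary charges (assumed value)
    density: particle density in kg/m^3 (assumed, melamine-like)
    """
    e = 1.602e-19                       # elementary charge, C
    g = 9.81                            # m/s^2
    r = 0.5 * diameter_um * 1e-6        # radius, m
    mass = density * (4.0 / 3.0) * math.pi * r**3
    E_v_per_m = mass * g / (charge_elem * e)
    return E_v_per_m / 100.0            # V/m -> V/cm

# An 8-um particle carrying 12000 elementary charges needs about 20.7 V/cm,
# near the upper end of the 5-22 V/cm range reported above.
print(round(sheath_field_for_levitation(8.0, 12000), 1))  # 20.7
```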
ERIC Educational Resources Information Center
Abbott, Laura Lynn Tanner
2014-01-01
The purpose of this quantitative non-experimental predictive study was to determine if CIA alignment factors and related district support systems are associated with student achievement to enable the turnaround of schools in crisis. This study aimed to utilize the District Snapshot Tool to determine if the district systems that support CIA…
Light propagation in the averaged universe
Bagheri, Samae; Schwarz, Dominik J. E-mail: dschwarz@physik.uni-bielefeld.de
2014-10-01
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris E-mail: julien.larena@gmail.com
2011-03-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H₀, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions.
Spacetime averaged null energy condition
Urban, Douglas; Olum, Ken D.
2010-06-15
The averaged null energy condition has known violations for quantum fields in curved space, even when one considers only achronal geodesics. Many such examples involve rapid variation in the stress-energy tensor in the vicinity of the geodesic under consideration, giving rise to the possibility that averaging in additional dimensions would yield a principle universally obeyed by quantum fields. However, after discussing various procedures for additional averaging, including integrating over all dimensions of the manifold, we give here a class of examples that violate any such averaged condition.
Arithmetic averaging: A versatile technique for smoothing and trend removal
Clark, E.L.
1993-12-31
Arithmetic averaging is simple, stable, and can be very effective in attenuating the undesirable components in a complex signal, thereby providing smoothing or trend removal. An arithmetic average is easy to calculate. However, the resulting modifications to the data, in both the time and frequency domains, are not well understood by many experimentalists. This paper discusses the following aspects of averaging: (1) types of averages -- simple, cumulative, and moving; and (2) time and frequency domain effects of the averaging process.
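The three kinds of averages listed in (1) can be sketched in a few lines, with illustrative data only:

```python
import numpy as np

def moving_average(x, window):
    """Trailing moving average: the mean of each 'window' consecutive samples."""
    return np.convolve(x, np.ones(window) / window, mode='valid')

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

simple = x.mean()                                      # one value for the whole record
cumulative = np.cumsum(x) / np.arange(1, x.size + 1)   # average of all samples so far
moving = moving_average(x, window=3)                   # local smoothing / trend removal

print(simple)           # 6.2
print(cumulative[-1])   # 6.2 (the cumulative average ends at the simple average)
print(moving)           # the three window means: 7/3, 14/3, 28/3
```

A moving average acts as a low-pass filter, which is the frequency-domain effect the abstract alludes to: components with periods shorter than the window are attenuated.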
Searching for the Beginning of the Ozone Turnaround Using a 22-Year Merged Satellite Data Set
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Meeson, Blanche W. (Technical Monitor)
2001-01-01
We have used the data from six satellite instruments that measure the total column amount of ozone to construct a consistent merged data set extending from late 1978 into 2000. The keys to constructing a merged data set are to minimize potential drift of individual instruments and to accurately establish instrument-to-instrument offsets. We have used the short-wavelength D-pair measurements (306nm-313nm) of the SBUV and SBUV/2 instruments near the equator to establish a relatively drift-free record for these instruments. We have then used their overlap with the Nimbus 7 and EP TOMS instruments to establish the relative calibration of the various instruments. We have evaluated the drift uncertainty in our merged ozone data (MOD) set by examining both the individual instrument drift uncertainty and the uncertainty in establishing the instrument-to-instrument differences. We conclude that the instrumental drift uncertainty over the 22-year data record is 0.9 %/decade (2-sigma). We have compared our MOD record with 37 ground stations that have a continuous record over that time period. We have a mean drift with respect to the stations of +0.3 %/decade, which is within 1-sigma of our uncertainty estimate. Using the satellite record as a transfer standard, we can estimate the capability of the ground instruments to establish satellite calibration. Adding the statistical variability of the station drifts with respect to the satellite to an estimate of the overall drift uncertainty of the world standard instrument, we conclude that the stations should be able to be used to establish the drift of the satellite data record to within an uncertainty of 0.6 %/decade (2-sigma). Adding to this an uncertainty due to the incomplete global coverage of the stations, we conclude that the station data should be able to establish the global trend with an uncertainty of about 0.7 %/decade, slightly better than for the satellite record. 
We conclude that merging the two records together gives only a slight improvement in the uncertainty. Keeping them separate gives the greater confidence of two independent measures of the ozone trend and potential recovery. We fit the trend in our MOD record through May of 1991 and then extrapolated forward to see if the data at the end of the record was above the statistical model as a measure of ozone recovery as was done in the last WMO/UNEP assessment report. Because our data set drifts with respect to the ground-stations through May of 1991, we calculated a smaller global trend (-1.1 %/decade) than in the WMO/UNEP report. Our data in 1998 and 1999 was, on average 2 DU above the extrapolated statistical model with a 2-sigma uncertainty of 6 DU. For the combined mid-latitudes of the northern and southern hemispheres, the data was 5 DU above the extrapolated statistical model with a 2-sigma uncertainty of 10 DU. These may be signs of recovery, but they are still statistically insignificant.
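Independent drift uncertainties of the kind combined above are conventionally added in quadrature. The sketch below reproduces that arithmetic; the 0.36 %/decade coverage term is not stated in the abstract and is back-solved here purely for illustration:

```python
import math

def combine_in_quadrature(*uncertainties):
    """Combine independent (2-sigma) uncertainties, e.g. in %/decade."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Illustrative only: a 0.6 %/decade station-based drift uncertainty combined
# with an assumed 0.36 %/decade coverage term gives roughly the 0.7 %/decade
# global-trend uncertainty quoted above.
print(round(combine_in_quadrature(0.6, 0.36), 2))  # 0.7
```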
High average power Pockels cell
Daly, Thomas P.
1991-01-01
A high average power Pockels cell is disclosed which reduces the effect of thermally induced strains in high average power laser technology. The Pockels cell includes an elongated, substantially rectangular crystalline structure formed from a KDP-type material to eliminate shear strains. The X- and Y-axes are oriented substantially perpendicular to the edges of the crystal cross-section and to the C-axis direction of propagation to eliminate shear strains.
Averaging in spherically symmetric cosmology
Coley, A. A.; Pelavas, N.
2007-02-15
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaître-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaître-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will yield improved predictions. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows and the relatively quick turnaround time needed to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amount of flight and experimental data available presents an additional challenge for researchers. 
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. The high quality data produced by this effort presents a new set of data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments and to evaluate the performance of high fidelity methods as compared to more typical RANS models.
Instrument to average 100 data sets
NASA Technical Reports Server (NTRS)
Tuma, G. B.; Birchenough, A. G.; Rice, W. J.
1977-01-01
An instrumentation system is currently under development which will measure many of the important parameters associated with the operation of an internal combustion engine. Some of these parameters include mass-fraction burn rate, ignition energy, and the indicated mean effective pressure. One of the characteristics of an internal combustion engine is the cycle-to-cycle variation of these parameters. A curve-averaging instrument has been produced which will generate the average curve, over 100 cycles, of any engine parameter. The average curve is described by 2048 discrete points, which are displayed on an oscilloscope screen to facilitate recording and are available in real time. Input can be any parameter which is expressed as a ±10-volt signal. Operation of the curve-averaging instrument is defined between 100 and 6000 rpm. Provisions have also been made for averaging as many as four parameters simultaneously, with a subsequent decrease in resolution. This provides the means to correlate and perhaps interrelate the phenomena occurring in an internal combustion engine. This instrument has been used successfully on a 1975 Chevrolet V8 engine and on a Continental 6-cylinder aircraft engine. While this instrument was designed for use on an internal combustion engine, with some modification it can be used to average any cyclically varying waveform.
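The point-by-point averaging the instrument performs can be mimicked in a few lines. The 2048-point, 100-cycle dimensions follow the description above, while the waveform and noise level are invented:

```python
import numpy as np

N_POINTS = 2048   # discrete points describing each averaged curve
N_CYCLES = 100    # engine cycles averaged per curve

rng = np.random.default_rng(2)
theta = np.linspace(0.0, 2.0 * np.pi, N_POINTS, endpoint=False)
true_curve = np.maximum(np.sin(theta), 0.0)   # stand-in for a cylinder-pressure trace

# 100 cycles with cycle-to-cycle variation, averaged point by point
cycles = true_curve + 0.2 * rng.standard_normal((N_CYCLES, N_POINTS))
average_curve = cycles.mean(axis=0)

print(average_curve.shape)  # (2048,)
```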
Vibrational averages along thermal lines
NASA Astrophysics Data System (ADS)
Monserrat, Bartomeu
2016-01-01
A method is proposed for the calculation of vibrational quantum and thermal expectation values of physical properties from first principles. Thermal lines are introduced: these are lines in configuration space parametrized by temperature, such that the value of any physical property along them is approximately equal to the vibrational average of that property. The number of sampling points needed to explore the vibrational phase space is reduced by up to an order of magnitude when the full vibrational density is replaced by thermal lines. Calculations of the vibrational averages of several properties and systems are reported, namely, the internal energy and the electronic band gap of diamond and silicon, and the chemical shielding tensor of L-alanine. Thermal lines pave the way for complex calculations of vibrational averages, including large systems and methods beyond semilocal density functional theory.
Evaluations of average level spacings
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables.
IgG/anti-IgG immunoassay based on a turn-around point long period grating
NASA Astrophysics Data System (ADS)
Chiavaioli, F.; Biswas, P.; Trono, C.; Giannetti, A.; Tombelli, S.; Bandyopadhyay, S.; Basumallick, N.; Dasgupta, K.; Baldini, F.
2014-02-01
Long period fiber gratings (LPFGs) have been proposed as label-free optical biosensors for several years. Refractive index changes, which modify the fiber transmission spectrum, are used for evaluating a biochemical interaction that occurs along the grating region. A turn-around point (TAP) LPFG was manufactured for enhancing the refractive index sensitivity of these devices. Considering the simplicity and the fast process with respect to the silanization procedure, the functionalization of the fiber was carried out by Eudragit L100 copolymer. An IgG/anti-IgG immunoassay was implemented for studying the antigen/antibody interaction. A limit of detection lower than 100 µg L-1 was achieved. Based on the same model assay, we compared the resonance wavelength shifts during the injection of 10 mg L-1 anti-IgG antigen between the TAP LPFG and a standard non-TAP one, in which the coupling occurs with a lower order cladding mode, to assess the performance improvement of LPFG-based biosensors.
Temperature-Averaging Thermal Probe
NASA Technical Reports Server (NTRS)
Kalil, L. F.; Reinhardt, V.
1984-01-01
Temperature-averaging thermal probe measures long-term temperature fluctuations in fluid environment. Consists of temperature probe embedded inside thermally massive material. Probe measurements used to estimate powerplant heating and cooling loads, map temperature profiles, and calibrate more-sensitive temperature probes.
Kuss Middle School: Expanding Time to Accelerate School Improvement
ERIC Educational Resources Information Center
Massachusetts 2020, 2012
2012-01-01
In 2004, Kuss Middle School became the first school declared "Chronically Underperforming" by the state of Massachusetts. But by 2010, Kuss had transformed itself into a model for schools around the country seeking a comprehensive turnaround strategy. Kuss is using increased learning time as the primary catalyst to accelerate learning, deepen…
ERIC Educational Resources Information Center
Conrad, Jill K.
2013-01-01
This dissertation examines the levels of and relationships between technical leadership, adaptive leadership, and defined autonomy among Denver school leaders along with their combined effects on school growth gains over time. Thirty principals provided complete responses to an online survey that included existing scales for technical leadership,…
Exploiting scale dependence in cosmological averaging
Mattsson, Teppo; Ronkainen, Maria
2008-02-15
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaître-Tolman-Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z ≈ 2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion.
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.; Petravic, Janka
2008-10-01
Homogeneous shear flows (with constant strain rate dv_x/dy) are generated with the Doll’s and Sllod algorithms and compared to corresponding inhomogeneous boundary-driven flows. We use one-, two-, and three-dimensional smooth-particle weight functions for computing instantaneous spatial averages. The nonlinear normal-stress differences are small, but significant, in both two and three space dimensions. In homogeneous systems the sign and magnitude of the shear-plane stress difference, Pxx-Pyy, depend on both the thermostat type and the chosen shear-flow algorithm. The Doll’s and Sllod algorithms predict opposite signs for this normal-stress difference, with the Sllod approach definitely wrong, but somewhat closer to the (boundary-driven) truth. Neither of the homogeneous shear algorithms predicts the correct ordering of the kinetic temperatures: Txx > Tzz > Tyy.
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-04-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer rather than over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. In contrast to previous studies, the averaging is exact, non-perturbative, and includes all non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Voids aligned along a certain direction give rise to a distance modulus correction which increases with redshift and is caused by cumulative gravitational lensing. That correction is present even for small voids and depends on their density contrast, not on their radius. Averaging over all directions destroys the cumulative lensing correction even in a non-randomized simple cubic lattice of voids. At low redshifts, the average distance modulus correction does not vanish due to the peculiar velocities, despite the photon flux conservation argument. A formula for the maximal possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (a) have approximately constant densities in their interior and walls; and (b) are not in a deep nonlinear regime. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids.
The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision and suggest a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.
Computation of flows in a turn-around duct and a turbine cascade using advanced turbulence models
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.; Luo, J.
1993-01-01
Numerical investigation has been carried out to evaluate the capability of the Algebraic Reynolds Stress Model (ARSM) and the Nonlinear Stress Model (NLSM) to predict strongly curved turbulent flow in a turn-around duct (TAD). The ARSM includes the near-wall damping term of pressure-strain correlation phi(sub ij,w), which enables accurate prediction of individual Reynolds stress components in wall flows. The TAD mean flow quantities are reasonably well predicted by various turbulence models. The ARSM yields better predictions for both the mean flow and the turbulence quantities than the NLSM and the k-epsilon (k = turbulent kinetic energy, epsilon = dissipation rate of k) model. The NLSM also shows slight improvement over the k-epsilon model. However, all the models fail to capture the recovery of the flow from strong curvature effects. The formulation for phi(sub ij,w) appears to be incorrect near the concave surface. The hybrid k-epsilon/ARSM, Chien's k-epsilon model, and Coakley's q-omega (q = the square root of k, omega = epsilon/k) model have also been employed to compute the aerodynamics and heat transfer of a transonic turbine cascade. The surface pressure distributions and the wake profiles are predicted well by all the models. The k-epsilon model and the k-epsilon/ARSM model provide better predictions of heat transfer than the q-omega model. The k-epsilon/ARSM solutions show significant differences in the predicted skin friction coefficients, heat transfer rates and the cascade performance parameters, as compared to the k-epsilon model. The k-epsilon/ARSM model appears to capture, qualitatively, the anisotropy associated with by-pass transition.
Herrinton, Lisa J; Liu, Liyan; Altschuler, Andrea; Dell, Richard; Rabrenovich, Violeta; Compton-Phillips, Amy L
2015-01-01
Context: Disease registries enable priority setting and batching of clinical tasks, such as reaching out to patients who have missed a routine laboratory test. Building disease registries requires collaboration among professionals in medicine, population science, and information technology. Specialty care addresses many complex, uncommon conditions, and these conditions are diverse. The cost to build and maintain traditional registries for many diverse, complex, low-frequency conditions is prohibitive. Objective: To develop and to test the Specialty Miniregistries platform, a collaborative interface designed to streamline the medical specialist’s contributions to the science and management of population health. Design: We used accessible technology to develop a platform that would generate miniregistries (small, routinely updated datasets) for surveillance, to identify patients who were missing expected utilization, and to influence clinicians and others to change practices to improve care. The platform was composed of staff, technology, and structured collaborations, organized into a workflow. The platform was tested in five medical specialty departments. Main Outcome Measure: Proof of concept. Results: The platform enabled medical specialists to rapidly and effectively communicate clinical questions, knowledge of disease, clinical workflows, and improvement opportunities. Their knowledge was used to build and to deploy the miniregistries. Each miniregistry required 1 to 2 hours of collaboration by a medical specialist. Turnaround was 1 to 14 days. Conclusions: The Specialty Miniregistries platform is useful for low-volume questions that often occur in specialty care, and it requires low levels of investment. The efficient organization of information workers to support accountable care is an emerging question. PMID:25785640
Average entanglement for Markovian quantum trajectories
Vogelsberger, S.; Spehner, D.
2010-11-15
We study the evolution of the entanglement of noninteracting qubits coupled to reservoirs under monitoring of the reservoirs by means of continuous measurements. We calculate the average of the concurrence of the qubits' wave function over all quantum trajectories. For two qubits coupled to independent baths subjected to local measurements, this average decays exponentially with a rate depending on the measurement scheme only. This contrasts with the known disappearance of entanglement after a finite time for the density matrix in the absence of measurements. For two qubits coupled to a common bath, the mean concurrence can vanish at discrete times. Our analysis applies to arbitrary quantum jump or quantum state diffusion dynamics in the Markov limit. We discuss the best measurement schemes to protect entanglement in specific examples.
Models of space averaged energetics of plates
NASA Technical Reports Server (NTRS)
Bouthier, O. M.; Bernhard, R. J.
1990-01-01
The analysis of high frequency vibrations in plates is of particular interest in the study of structure borne noise in aircraft. The current methods of analysis are either too expensive (finite element method) or may have a confidence band wider than desirable (Statistical Energy Analysis). An alternative technique to model the space and time averaged response of structural acoustics problems with enough detail to include all significant mechanisms of energy generation, transmission, and absorption is highly desirable. The focus of this paper is the development of a set of equations which govern the space and time averaged energy density in plates. To solve these equations, a new type of boundary value problem must be treated in terms of energy density variables using energy and intensity boundary conditions. A computer simulation verification study of the energy governing equation is performed. A finite element formulation of the new equations is also implemented and several test cases are analyzed and compared to analytical solutions.
ERIC Educational Resources Information Center
Circle, David
2005-01-01
When a teacher gives their time to a student, it is more significant to that student than anything else one could do for him or her. Music teachers deal with time all the time. Someone once said that "time is like money: we never have enough." This may seem true; however, time is not like money. One can make more money, but one cannot "make time."…
Reflight of the First Microgravity Science Laboratory: Quick Turnaround of a Space Shuttle Mission
NASA Technical Reports Server (NTRS)
Simms, Yvonne
1998-01-01
Due to the short flight of Space Shuttle Columbia, STS-83, in April 1997, NASA chose to refly the same crew, shuttle, and payload on STS-94 in July 1997. This was the first reflight of an entire mission complement. The reflight of the First Microgravity Science Laboratory (MSL-1) on STS-94 required an innovative approach to Space Shuttle payload ground processing. Ground processing time for the Spacelab Module, which served as the laboratory for MSL-1 experiments, was reduced by seventy-five percent. The Spacelab Module is a pressurized facility with avionics and thermal cooling and heating accommodations. Boeing-Huntsville, formerly McDonnell Douglas Aerospace, has been the Spacelab Integration Contractor since 1977. The first Spacelab Module flight was in 1983. An experienced team determined what was required to refurbish the Spacelab Module for reflight. Team members had diverse knowledge, skills, and background. An engineering assessment of subsystems, including mechanical, electrical power distribution, command and data management, and environmental control and life support, was performed. Recommendations for resolution of STS-83 Spacelab in-flight anomalies were provided. Inspections and tests that must be done on critical Spacelab components were identified. This assessment contributed to the successful reflight of MSL-1, the fifteenth Spacelab Module mission.
Orbit Averaging in Perturbed Planetary Rings
NASA Astrophysics Data System (ADS)
Stewart, Glen R.
2015-11-01
The orbital period is typically much shorter than the time scale for dynamical evolution of large-scale structures in planetary rings. This large separation in time scales motivates the derivation of reduced models by averaging the equations of motion over the local orbit period (Borderies et al. 1985, Shu et al. 1985). A more systematic procedure for carrying out the orbit averaging is to use Lie transform perturbation theory to remove the dependence on the fast angle variable from the problem order-by-order in epsilon, where the small parameter epsilon is proportional to the fractional radial distance from exact resonance. This powerful technique has been developed and refined over the past thirty years in the context of gyrokinetic theory in plasma physics (Brizard and Hahm, Rev. Mod. Phys. 79, 2007). When the Lie transform method is applied to resonantly forced rings near a mean motion resonance with a satellite, the resulting orbit-averaged equations contain the nonlinear terms found previously, but also contain additional nonlinear self-gravity terms of the same order that were missed by Borderies et al. and by Shu et al. The additional terms result from the fact that the self-consistent gravitational potential of the perturbed rings modifies the orbit-averaging transformation at nonlinear order. These additional terms are the gravitational analog of electrostatic ponderomotive forces caused by large amplitude waves in plasma physics. The revised orbit-averaged equations are shown to modify the behavior of nonlinear density waves in planetary rings compared to the previously published theory. This research was supported by NASA's Outer Planets Research program.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al4.8Si1.2O9.6) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al4.82(3)Si1.18(1)O9.59(5)) and by average structure refinements. 8(5) to 20(13)% of the available Si was found in the T* position of the tetrahedral triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures, which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137(2) 0 ½], q2 = [0 0.4021(5) 0.1834(2)] and q3 = [0 0.4009(5) -0.1834(2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results, with slightly varying average occupation factors for the tetrahedral units.
As a result, the real structure of mullite is locally ordered (as previously known), but in the long range its average is not completely disordered; the modulated structure of mullite may thus be denoted the true 'average structure of mullite'. PMID:26027012
Averaging on Earth-Crossing Orbits
NASA Astrophysics Data System (ADS)
Gronchi, G. F.; Milani, A.
The orbits of planet-crossing asteroids (and comets) can undergo close approaches and collisions with some major planet. This introduces a singularity in the N-body Hamiltonian, and the averaging of the equations of motion, traditionally used to compute secular perturbations, is undefined. We show that it is possible to define in a rigorous way some generalised averaged equations of motion, in such a way that the generalised solutions are unique and piecewise smooth. This is obtained, both in the planar and in the three-dimensional case, by means of the method of extraction of the singularities by Kantorovich. The modified distance used to approximate the singularity is the one used by Wetherill in his method to compute probability of collision. Some examples of averaged dynamics have been computed; a systematic exploration of the averaged phase space to locate the secular resonances should be the next step. "Alice sighed wearily. 'I think you might do something better with the time,' she said, 'than waste it asking riddles with no answers.'" (Alice in Wonderland, L. Carroll)
Cosmological Measures with Volume Averaging
NASA Astrophysics Data System (ADS)
Page, Don N.
It has been common for cosmologists to advocate volume weighting for the cosmological measure problem, weighting spatial hypersurfaces by their volume. However, this often leads to the Boltzmann brain problem, that almost all observations would be by momentary Boltzmann brains that arise very briefly as quantum fluctuations in the late universe when it has expanded to a huge size, so that our observations (too ordered for Boltzmann brains) would be highly atypical and unlikely. Here it is suggested that volume weighting may be a mistake. Volume averaging is advocated as an alternative. One consequence may be a loss of the argument that eternal inflation gives a nonzero probability that our universe now has infinite volume.
Achronal averaged null energy condition
Graham, Noah; Olum, Ken D.
2007-09-15
The averaged null energy condition (ANEC) requires that the integral over a complete null geodesic of the stress-energy tensor projected onto the geodesic tangent vector is never negative. This condition is sufficient to prove many important theorems in general relativity, but it is violated by quantum fields in curved spacetime. However there is a weaker condition, which is free of known violations, requiring only that there is no self-consistent spacetime in semiclassical gravity in which ANEC is violated on a complete, achronal null geodesic. We indicate why such a condition might be expected to hold and show that it is sufficient to rule out closed timelike curves and wormholes connecting different asymptotically flat regions.
ERIC Educational Resources Information Center
Hood, Lucy
2006-01-01
This article describes the Soddy-Daisy High School in southeastern Tennessee. It used to be that vocational training and a focus on academic studies were considered completely different means of education. But in Soddy-Daisy, Tennessee, the two go hand in hand. Eric Thomas and his brother Mark, teach side by side in adjacent rooms, where computer…
Disk-averaged synthetic spectra of Mars
NASA Technical Reports Server (NTRS)
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Auto-exploratory average reward reinforcement learning
Ok, DoKyeong; Tadepalli, P.
1996-12-31
We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this "Auto-exploratory H-learning" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration.
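H-learning itself is model-based and online, and the abstract gives no pseudocode; as a rough sketch of the average-reward criterion it optimizes, here is relative value iteration on a toy two-state MDP. The MDP, function name, and damping scheme are illustrative assumptions, not the paper's robot scheduling task:

```python
# transitions[state][action] = (next_state, reward); a tiny deterministic
# MDP invented for illustration -- not the task studied in the paper.
transitions = {
    0: {"move": (1, 0.0), "stay": (0, 0.5)},
    1: {"move": (0, 2.0)},
}

def average_reward_values(transitions, ref=0, iters=200, damping=0.5):
    """Relative value iteration for the average-reward criterion.

    Returns (rho, h): the optimal average reward (gain) and relative
    values h(s), with h(ref) pinned near zero. The damping term avoids
    the oscillations plain value iteration shows on periodic MDPs.
    """
    h = {s: 0.0 for s in transitions}
    rho = 0.0
    for _ in range(iters):
        # one Bellman backup under the average-reward criterion
        th = {s: max(r + h[s2] for s2, r in transitions[s].values())
              for s in transitions}
        rho = th[ref]
        h = {s: damping * h[s] + (1.0 - damping) * (th[s] - rho)
             for s in transitions}
    return rho, h

rho, h = average_reward_values(transitions)
# Cycling 0 -> 1 -> 0 earns rewards 0 and 2, i.e. 1.0 per step on average,
# beating the 0.5 per step of always staying in state 0.
```

Note that a purely greedy online learner can get stuck on the "stay" action here, which is exactly the exploration problem the auto-exploratory extension addresses.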
NASA Astrophysics Data System (ADS)
Amoroso, Richard L.
2013-09-01
The concept of time in the 'clockwork' Newtonian world was irrelevant, and has generally been ignored until recently by several generations of physicists since the implementation of quantum mechanics. We will set aside the utility of time as a property relating to physical calculations of events relating to a metric's line element, or as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics, etc.; i.e., we will discard all the usual uses of time as a concept used to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.
Predictability of time averages: The influence of the boundary forcing
NASA Technical Reports Server (NTRS)
Shukla, J.
1982-01-01
The physical mechanisms through which changes in the boundary forcings of SST, soil moisture, albedo, sea ice, and snow influence the atmospheric circulation are discussed. Results of numerical experiments conducted with the GLAS climate model to determine the sensitivity of the model atmosphere to changes in boundary conditions of SST, soil moisture, and albedo over limited regions are discussed. It is found that changes in SST and soil moisture in the tropics produce large changes in the atmospheric circulation and rainfall over the tropics as well as over mid-latitudes.
Effect of wind averaging time on wind erosivity estimation
Technology Transfer Automated Retrieval System (TEKTRAN)
The Wind Erosion Prediction System (WEPS) and Revised Wind Erosion Equation (RWEQ) are widely used for estimating the wind-induced soil erosion at a field scale. Wind is the principal erosion driver in the two models. The wind erosivity, which describes the capacity of wind to cause soil erosion is ...
Global atmospheric circulation statistics: Four year averages
NASA Technical Reports Server (NTRS)
Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.
1987-01-01
Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.
Average deployments versus missile and defender parameters
Canavan, G.H.
1991-03-01
This report evaluates the average number of reentry vehicles (RVs) that could be deployed successfully as a function of missile burn time, RV deployment times, and the number of space-based interceptors (SBIs) in defensive constellations. Leakage estimates of boost-phase kinetic-energy defenses as functions of launch parameters and defensive constellation size agree with integral predictions of near-exact calculations for constellation sizing. The calculations discussed here test more detailed aspects of the interaction. They indicate that SBIs can efficiently remove about 50% of the RVs from a heavy missile attack. The next 30% can be removed with two-fold less effectiveness. The next 10% could double constellation sizes. 5 refs., 7 figs.
Transforming Schools through Expanded Learning Time: Orchard Gardens K-8 Pilot School. Update 2013
ERIC Educational Resources Information Center
Chan, Roy
2013-01-01
For years, Orchard Gardens K-8 Pilot School was plagued by low student achievement and high staff turnover. Then, in 2010, with an expanded school schedule made possible through federal funding, Orchard Gardens began a remarkable turnaround. Today, the school is demonstrating how increased learning time, combined with other key turnaround…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 91.1304 Section 91.1304... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive credit balance for a model year. Positive credits to be used in averaging may be obtained from...
Self-similarity of higher-order moving averages
NASA Astrophysics Data System (ADS)
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained by simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for those applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
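The first-order (standard) case described above can be sketched numerically: compute the variance of the series around its trailing n-point moving average and fit the scaling σ_DMA(n) ~ n^H on a log-log plot. A minimal Python sketch, assuming a simple trailing average and an ordinary Brownian test series (the higher-order polynomial detrending introduced in the paper is not implemented):

```python
import numpy as np

def dma_variance(y, n):
    """Variance of the series around its trailing n-point simple moving average."""
    kernel = np.ones(n) / n
    ma = np.convolve(y, kernel, mode="valid")   # length N - n + 1
    resid = y[n - 1:] - ma                      # align series with its average
    return np.mean(resid ** 2)

def hurst_dma(y, windows):
    """Estimate H from the scaling sigma_DMA(n) ~ n^H via a log-log fit."""
    sig = np.sqrt([dma_variance(y, n) for n in windows])
    slope, _ = np.polyfit(np.log(windows), np.log(sig), 1)
    return slope

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))     # ordinary Brownian path, H = 0.5
H = hurst_dma(walk, windows=[4, 8, 16, 32, 64, 128])
print(round(H, 2))
```

For a Brownian path the fitted exponent should come out near H = 0.5; longer series and wider window ranges tighten the estimate.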
Adaptive common average filtering for myocontrol applications.
Rehbaum, Hubertus; Farina, Dario
2015-02-01
The use of electromyography (EMG) for the control of upper-limb prostheses has received great interest in neurorehabilitation engineering for decades. Important advances have been made in the development of machine learning algorithms for myocontrol. This paper describes a novel adaptive filter for EMG preprocessing to be applied as a conditioning stage for optimal subsequent information extraction. The aim of this filter is to improve both the quality (signal-to-noise ratio) and the selectivity of the EMG recordings. The filter is based on the classic common average reference (CAR), often used in EEG processing. However, while CAR is stationary, the proposed filter, which is referred to as adaptive common average reference (ACAR), is signal-dependent and its spatial transfer function is adapted over time. The ACAR filter is evaluated in this study for noise reduction and selectivity. Furthermore, it is proven that its application improves the performance of both pattern recognition and regression methods for myoelectric control. It is concluded that the proposed novel filter for EMG conditioning is a useful preprocessing tool in myocontrol applications. PMID:25388778
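The stationary CAR that ACAR builds on simply subtracts the across-channel mean from every channel. A minimal sketch with hypothetical numbers (the adaptive, signal-dependent weighting that defines ACAR itself is not reproduced here):

```python
import numpy as np

def common_average_reference(x):
    """Stationary CAR: subtract the across-channel mean from every channel.

    x: (channels, samples) array of EMG/EEG signals.
    """
    return x - x.mean(axis=0, keepdims=True)

# Common-mode interference (e.g. 50 Hz pickup) is shared by all channels,
# so CAR removes it while channel-specific activity survives.
t = np.linspace(0, 1, 1000, endpoint=False)
hum = 0.8 * np.sin(2 * np.pi * 50 * t)          # common-mode noise
rng = np.random.default_rng(1)
signals = rng.standard_normal((4, t.size)) * 0.1  # channel-specific activity
x = signals + hum                                 # every channel sees the hum
y = common_average_reference(x)
```

Because the simulated 50 Hz pickup is identical on every channel, subtracting the common average removes it exactly; activity unique to a channel passes through, minus the small contribution it makes to the mean.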
Average observational quantities in the timescape cosmology
Wiltshire, David L.
2009-12-15
We examine the properties of a recently proposed observationally viable alternative to homogeneous cosmology with smooth dark energy, the timescape cosmology. In the timescape model cosmic acceleration is realized as an apparent effect related to the calibration of clocks and rods of observers in bound systems relative to volume-average observers in an inhomogeneous geometry in ordinary general relativity. The model is based on an exact solution to a Buchert average of the Einstein equations with backreaction. The present paper examines a number of observational tests which will enable the timescape model to be distinguished from homogeneous cosmologies with a cosmological constant or other smooth dark energy, in current and future generations of dark energy experiments. Predictions are presented for comoving distance measures; H(z); the equivalent of the dark energy equation of state, w(z); the Om(z) measure of Sahni, Shafieloo, and Starobinsky; the Alcock-Paczynski test; the baryon acoustic oscillation measure, D_V; the inhomogeneity test of Clarkson, Bassett, and Lu; and the time drift of cosmological redshifts. Where possible, the predictions are compared to recent independent studies of similar measures in homogeneous cosmologies with dark energy. Three separate tests with indications of results in possible tension with the ΛCDM model are found to be consistent with the expectations of the timescape cosmology.
ERIC Educational Resources Information Center
Molfese, Dennis L.; Key, Alexandra Fonaryova; Kelly, Spencer; Cunningham, Natalie; Terrell, Shona; Ferguson, Melissa; Molfese, Victoria J.; Bonebright, Terri
2006-01-01
Event-related potentials (ERPs) were recorded from 27 children (14 girls, 13 boys) who varied in their reading skill levels. Both behavior performance measures recorded during the ERP word classification task and the ERP responses themselves discriminated between children with above-average, average, and below-average reading skills. ERP…
Digital data averager improves conventional measurement system performance
NASA Technical Reports Server (NTRS)
Naylor, T. K.; Roberts, J. A., Jr.; Schellenbach, R. R.
1968-01-01
Multipurpose digital averager provides measurement improvement in noisy signal environments. It provides increased measurement accuracy and resolution to basic instrumentation devices by an arithmetical process in real time. It is used with standard conventional measurement equipment and digital data printers.
Average kinetic energy of the superconducting state
NASA Astrophysics Data System (ADS)
Doria, Mauro M.; Salem-Sugui, S.; de Oliveira, I. G.; Ghivelder, L.; Brandt, E. H.
2002-04-01
Isothermal magnetization curves are plotted as the magnetization times the magnetic induction, 4πM·B, versus the applied field H. We show here that this curve is the average kinetic energy of the superconducting state versus the applied field, for type-II superconductors with a high Ginzburg-Landau parameter κ. The maximum of 4πM·B occurs at a field H*, directly related to the upper critical field Hc2, suggesting that Hc2(T) may be extracted from such plots even in cases when it is too high for direct measurement. We obtain these plots both theoretically, from the Ginzburg-Landau theory, and experimentally, using a Niobium sample with Tc=8.5 K, and compare them.
Average Lorentz self-force from electric field lines
NASA Astrophysics Data System (ADS)
Aashish, Sandeep; Haque, Asrarul
2015-09-01
We generalize the derivation of the electromagnetic fields of a charged particle moving with constant acceleration [Singal 2011 Am. J. Phys. 79 1036] to variable (piecewise constant) acceleration over a small finite time interval, using Coulomb's law, relativistic transformations of electromagnetic fields, and Thomson's construction [Thomson 1904 Electricity and Matter (New York: Charles Scribners) ch 3]. We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at retarded time.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... averaging plan is in compliance with the Acid Rain emission limitation for NOX under the plan only if...
RHIC BPM system average orbit calculations
Michnoff, R.; Cerniglia, P.; Degen, C.; Hulsart, R.; et al.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an average orbit signal quality improvement, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and performance with beam.
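Why averaging over one full period of the ~10 Hz perturbation beats a fixed 10,000-turn window can be shown with synthetic turn-by-turn data. A sketch with illustrative numbers (the revolution frequency, orbit offset, and perturbation amplitude below are assumptions for the demonstration, not values from the paper):

```python
import numpy as np

f_rev = 78_000          # assumed revolution frequency, Hz (illustrative)
f_pert = 10.0           # ~10 Hz orbit perturbation
closed_orbit = 1.25     # mm, the quantity we want to measure (assumed)

turns = np.arange(int(f_rev))                  # one second of turn-by-turn data
pos = closed_orbit + 0.5 * np.sin(2 * np.pi * f_pert * turns / f_rev)

def average_orbit(samples, n_turns):
    """Average turn-by-turn positions over a programmable window."""
    return samples[:n_turns].mean()

short = average_orbit(pos, 10_000)                    # fixed 10k turns (original scheme)
one_period = average_orbit(pos, int(f_rev / f_pert))  # one full 10 Hz period
print(abs(short - closed_orbit), abs(one_period - closed_orbit))
```

Averaging over exactly one perturbation period cancels the sinusoid, while a window that spans a fractional number of periods leaves a residual bias; averaging over many periods (the continuous scheme) suppresses it further.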
Averaging in LRS class II spacetimes
NASA Astrophysics Data System (ADS)
Kašpar, Petr; Svítek, Otakar
2015-02-01
We generalize Buchert's averaged equations (Gen Relativ Gravit 32; 105, 2000; Gen Relativ Gravit 33; 1381, 2001) to LRS class II dust model in the sense that all Einstein equations are averaged, not only the trace part. We derive the relevant averaged equations and we investigate backreaction on expansion and shear scalars in an approximate LTB model. Finally we propose a way to close the system of averaged equations.
Averaging and Adding in Children's Worth Judgements
ERIC Educational Resources Information Center
Schlottmann, Anne; Harman, Rachel M.; Paine, Julie
2012-01-01
Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann, …
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
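The sampling-error mechanism described above is easy to demonstrate on a toy diurnal cycle. A sketch with hypothetical numbers, using aerosol optical depth (AOD) as a stand-in for the full forcing calculation:

```python
import numpy as np

# Hypothetical diurnal cycle of AOD: a ~20% swing around the daily mean.
hours = np.arange(24)
aod = 0.15 * (1 + 0.2 * np.sin(2 * np.pi * (hours - 6) / 24))

daily_mean_aod = aod.mean()      # from full hourly sampling

# Sparse sampling: a single early-afternoon measurement (13:00 local)
sparse_aod = aod[13]

rel_err = abs(sparse_aod - daily_mean_aod) / daily_mean_aod
print(f"daily mean {daily_mean_aod:.4f}, 13:00 sample {sparse_aod:.4f}, "
      f"error {100 * rel_err:.1f}%")
```

A single afternoon sample overestimates the daily mean by roughly the size of the diurnal swing, which is the kind of error that propagates into a 24-h average forcing when daily averaged optical properties cannot be formed.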
Low-frequency conductivity in the average-atom approximation
NASA Astrophysics Data System (ADS)
Johnson, W. R.
2009-06-01
The quantum-mechanical average-atom model is reviewed and applied to determine scattering phase shifts, mean-free paths, and relaxation times in warm-dense plasmas. Static conductivities σ are based on an average-atom version of the Ziman formula. Applying linear response to the average-atom model leads to an average-atom version of the Kubo-Greenwood formula for the frequency-dependent conductivity σ(ω). The free-free contribution to σ(ω) is found to diverge as 1/ω² at low frequencies; however, considering effects of multiple scattering leads to a modified version of σ(ω) that is finite and reduces to the Ziman formula at ω = 0. The resulting average-atom version of the Kubo-Greenwood formula satisfies the conductivity sum rule. The dielectric function ε(ω) and the complex index of refraction n(ω) + iκ(ω) are inferred from σ(ω) using dispersion relations. Applications to anomalous dispersion in laser-produced plasmas are discussed.
Do diurnal aerosol changes affect daily average radiative forcing?
NASA Astrophysics Data System (ADS)
Kassianov, Evgueni; Barnard, James; Pekour, Mikhail; Berg, Larry K.; Michalsky, Joseph; Lantz, Kathy; Hodges, Gary
2013-06-01
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement
Jones, Robert E
2015-01-01
Background Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies during routine care activities have not been available. Objective We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results Of the 727 available patients during clinic survey days, 316 (43.4%) attempted the survey and 293 (40.3%) completed it. For the four 3-question surveys, the overall average time per survey was 40.4 seconds, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple choice questions (average 9.6 versus 14.0 seconds). Average response time showed no clear pattern by question order or proctor strategy, but increased monotonically with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions This technology-enabled data management system helped capture patient opinions and accelerated the turnaround of survey data, with minimal impact on a busy primary care clinic.
This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system. PMID:25768807
General periodic average structures of decagonal quasicrystals.
Cervellino, Antonio; Steurer, Walter
2002-03-01
The concept of periodic average structure is adapted from the theory of incommensurately modulated structures. For quasicrystals, this concept (up to now explored in only a few cases) is becoming increasingly useful for understanding their properties and interpreting some important structural features. The peculiar property of quasicrystals is that they admit not one but infinitely many possible different average structures. Few of them, however, will be meaningful. Here we give a simple reciprocal-space method for generating all the possible periodic average structures of decagonal quasicrystals, together with some new ideas about their meaning. By this method, the most significant average structures can be recognized from the diffraction pattern. PMID:11832588
Cell averaging Chebyshev methods for hyperbolic problems
NASA Technical Reports Server (NTRS)
Wei, Cai; Gottlieb, David; Harten, Ami
1990-01-01
A cell averaging method for the Chebyshev approximations of first order hyperbolic equations in conservation form is described. Formulas are presented for transforming between pointwise data at the collocation points and cell averaged quantities, and vice versa. This step, trivial for the finite difference and Fourier methods, is nontrivial for the global polynomials used in spectral methods. The cell averaging methods presented are proven stable for linear scalar hyperbolic equations, and numerical simulations of shock-density wave interaction using the new cell averaging Chebyshev methods are presented.
Spectral and parametric averaging for integrable systems
NASA Astrophysics Data System (ADS)
Ma, Tao; Serota, R. A.
2015-05-01
We analyze two theoretical approaches to ensemble averaging for integrable systems in quantum chaos, spectral averaging (SA) and parametric averaging (PA). For SA, we introduce a new procedure, namely, rescaled spectral averaging (RSA). Unlike traditional SA, it can describe the correlation function of the spectral staircase (CFSS) and produce persistent oscillations of the interval level number variance (IV). PA, while not as accurate as RSA for the CFSS and IV, can also produce persistent oscillations of the global level number variance (GV) and better describes saturation level rigidity as a function of the running energy. Overall, it is the most reliable method for a wide range of statistics.
Modelling and designing digital control systems with averaged measurements
NASA Technical Reports Server (NTRS)
Polites, Michael E.; Beale, Guy O.
1988-01-01
An account is given of the control systems engineering methods applicable to the design of digital feedback controllers for aerospace deterministic systems in which the output, rather than being an instantaneous measure of the system at the sampling instants, instead represents an average measure of the system over the time interval between samples. The averaging effect can be included during the modeling of the plant, thereby obviating the iteration of design/simulation phases.
Average oxidation state of carbon in proteins
Dick, Jeffrey M.
2014-01-01
The formal oxidation state of carbon atoms in organic molecules depends on the covalent structure. In proteins, the average oxidation state of carbon (ZC) can be calculated as an elemental ratio from the chemical formula. To investigate oxidation-reduction (redox) patterns, groups of proteins from different subcellular locations and phylogenetic groups were selected for comparison. Extracellular proteins of yeast have a relatively high oxidation state of carbon, corresponding with oxidizing conditions outside of the cell. However, an inverse relationship between ZC and redox potential occurs between the endoplasmic reticulum and cytoplasm. This trend provides support for the hypothesis that protein transport and turnover are ultimately coupled to the maintenance of different glutathione redox potentials in subcellular compartments. There are broad changes in ZC in whole-genome protein compositions in microbes from different environments, and in Rubisco homologues, lower ZC tends to occur in organisms with higher optimal growth temperature. Energetic costs calculated from thermodynamic models are consistent with the notion that thermophilic organisms exhibit molecular adaptation to not only high temperature but also the reducing nature of many hydrothermal fluids. Further characterization of the material requirements of protein metabolism in terms of the chemical conditions of cells and environments may help to reveal other linkages among biochemical processes with implications for changes on evolutionary time scales. PMID:25165594
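ZC as an elemental ratio can be computed directly from a chemical formula. A sketch assuming the usual formal-oxidation-state bookkeeping (each H contributes −1, each N +3, each O and S +2, all divided by the number of carbons); the sulfur coefficient in particular is an assumption here and should be checked against the paper before reuse:

```python
def zc(c, h, n=0, o=0, s=0):
    """Average oxidation state of carbon for a molecule C_c H_h N_n O_o S_s,
    assuming the elemental-ratio formula Z_C = (-h + 3n + 2o + 2s) / c."""
    return (-h + 3 * n + 2 * o + 2 * s) / c

print(zc(6, 12, o=6))        # glucose C6H12O6  ->  0.0
print(zc(1, 4))              # methane CH4      -> -4.0
print(zc(1, 0, o=2))         # CO2              ->  4.0
print(zc(3, 7, n=1, o=2))    # alanine C3H7NO2  ->  0.0
```

The small-molecule checks match the familiar formal oxidation states (methane fully reduced at −4, CO2 fully oxidized at +4), which is the sanity test for applying the same ratio to whole-protein formulas.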
Twenty-year outcome for individuals with autism and average or near-average cognitive abilities.
Farley, Megan A; McMahon, William M; Fombonne, Eric; Jenson, William R; Miller, Judith; Gardner, Michael; Block, Heidi; Pingree, Carmen B; Ritvo, Edward R; Ritvo, Riva Arielle; Coon, Hilary
2009-04-01
Previous studies found substantial variability in adult outcome for people with autism whose cognitive functioning was within the near-average and average ranges. This study examined adult outcome for 41 such individuals (38 men and 3 women) originally identified through an epidemiological survey of autism in Utah. Mean age at the time of their previous cognitive assessment was 7.2 years (SD=4.1, range=3.1-25.9 years) and at follow-up was 32.5 years (SD=5.7 years, range=22.3-46.4 years). Outcome measures included standardized assessments of diagnostic status, cognitive ability, and adaptive behavior. Additional information collected concerned demographic variables, indicators of independence, social relationships, medical and psychiatric conditions, and social service use. Outcomes for this sample were better than outcomes described in previous work on individuals with similar cognitive functioning. For example, half of the participants were rated as "Very Good" or "Good" on a global outcome measure. As in previous studies, there was considerable variability in measured cognitive ability over time. Over half of the sample had large gains or losses of cognitive ability of greater than 1 standard deviation. Cognitive gain was associated with better outcome, as was better adaptive functioning. While all participants had baseline IQs in the nonimpaired range, there was limited evidence to support the use of other early childhood variables to predict adult outcome. PMID:19455645
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2014 CFR
2014-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2013 CFR
2013-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the...
Average Transmission Probability of a Random Stack
ERIC Educational Resources Information Center
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Averaging. 90.204 Section 90.204 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Certification Averaging, Banking,...
Determinants of College Grade Point Averages
ERIC Educational Resources Information Center
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by…
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator may average TF emissions from potlines and demonstrate compliance with the limits in Table 1 of... operator also may average POM emissions from potlines and demonstrate compliance with the limits in Table 2... limit in Table 1 of this subpart (for TF emissions) and/or Table 2 of this subpart (for POM...
40 CFR 63.846 - Emission averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
... averaging. (a) General. The owner or operator of an existing potline or anode bake furnace in a State that... by total aluminum production. (c) Anode bake furnaces. The owner or operator may average TF emissions from anode bake furnaces and demonstrate compliance with the limits in Table 3 of this subpart...
Evaluating template bias when synthesizing population averages
NASA Astrophysics Data System (ADS)
Carlson, Blake L.; Christensen, Gary E.; Johnson, Hans J.; Vannier, Michael W.
2001-07-01
Establishing the average shape and spatial variability for a set of similar anatomical objects is important for detecting and discriminating morphological differences between populations. This may be done using deformable templates to synthesize a 3D CT/MRI image of the average anatomy from a set of CT/MRI images collected from a population of similar anatomical objects. This paper investigates the error associated with the choice of template selected from the population used to synthesize the average population shape. Population averages were synthesized for a population of five infant skulls with sagittal synostosis and a population of six normal adult brains using a consistent linear-elastic image registration algorithm. Each data set from the populations was used as the template to synthesize a population average. This resulted in five different population averages for the skull population and six different population averages for the brain population. The displacement variance distance from a skull within the population to the other skulls in the population ranged from 5.5 to 9.9 mm², while the displacement variance distance from the synthesized average skulls to the population ranged from 2.2 to 2.7 mm². The displacement variance distance from a brain within the population to the other brains in the population ranged from 9.3 to 14.2 mm², while the displacement variance distance from the synthesized average brains to the population ranged from 3.2 to 3.6 mm². These results suggest that there was no significant difference between the choice of template with respect to the shape of the synthesized average data set for these two populations.
Patel, Robin
2013-08-01
Despite widespread application of nucleic acid diagnostics, cultures remain integral in modern laboratories. Because cultures detect a large number of organism types, it is unlikely that they will disappear from clinical practice in the near future. Their downside is slow turn-around time, impacted by time to growth and identification of that growth. The latter is expedited using a new proteomic technology, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). PMID:23595835
Greninger, Alexander L; Naccache, Samia N; Federman, Scot; Yu, Guixia; Mbala, Placide; Bres, Vanessa; Stryke, Doug; Bouquet, Jerome; Somasekar, Sneha; Linnen, Jeffrey M; Dodd, Roger; Mulembakani, Prime; Schneider, Bradley S; Muyembe-Tamfum, Jean-Jacques; Stramer, Susan L; Chiu, Charles Y
2015-01-01
We report unbiased metagenomic detection of chikungunya virus (CHIKV), Ebola virus (EBOV), and hepatitis C virus (HCV) from four human blood samples by MinION nanopore sequencing coupled to a newly developed, web-based pipeline for real-time bioinformatics analysis on a computational server or laptop (MetaPORE). At titers ranging from 10^7 to 10^8 copies per milliliter, reads to EBOV from two patients with acute hemorrhagic fever and CHIKV from an asymptomatic blood donor were detected within 4 to 10 min of data acquisition, while lower titer HCV virus (1 × 10^5 copies per milliliter) was detected within 40 min. Analysis of mapped nanopore reads alone, despite an average individual error rate of 24% (range 8-49%), permitted identification of the correct viral strain in all four isolates, and 90% of the genome of CHIKV was recovered with 97-99% accuracy. Using nanopore sequencing, metagenomic detection of viral pathogens directly from clinical samples was performed within an unprecedented <6 hr sample-to-answer turnaround time, and in a timeframe amenable to actionable clinical and public health diagnostics. PMID:26416663
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics. PMID:18999811
The uncertainty of simple spatial averages using rain gauge networks
Morrissey, M.L.; Maliekal, J.A.; Greene, J.S.
1995-08-01
A technique is presented whereby the standard error of a simple spatial average can be computed given any spatial configuration of measuring sites. The method should prove useful to those wishing to assess the time-averaged performance of a network of rain gauges where the site locations are fixed within the averaging area. It is demonstrated that, in addition to the site density and the spatial structure of the measured variable, the spatial geometry of the network with respect to the averaging area must be accounted for in the standard error expression. By incorporating estimates of the long-term population mean in the method, bias and random error in the sample statistics are minimized. 18 refs., 9 figs.
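The geometry dependence can be sketched directly: for fixed sites, the variance of the simple average is the double sum of inter-site covariances. A minimal sketch assuming a hypothetical isotropic exponential correlation model (the paper's treatment of the averaging-area geometry itself is not reproduced):

```python
import numpy as np

def network_mean_stderr(sites, sigma=1.0, corr_len=50.0):
    """Standard error of the simple average over fixed gauge sites,
    assuming isotropic exponential correlation rho(d) = exp(-d / corr_len)."""
    sites = np.asarray(sites, dtype=float)
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    rho = np.exp(-d / corr_len)              # inter-site correlation matrix
    n = len(sites)
    # Var(mean) = sigma^2 / n^2 * sum_ij rho(d_ij)
    return np.sqrt(sigma**2 * rho.sum() / n**2)

# Clustered gauges are largely redundant (highly correlated), so the network's
# effective sample size shrinks and the standard error grows.
se_spread = network_mean_stderr([(0, 0), (100, 0), (0, 100), (100, 100)])
se_clustered = network_mean_stderr([(0, 0), (5, 0), (0, 5), (5, 5)])
print(se_spread, se_clustered)
```

With the same number of gauges, the well-spread network has a markedly smaller standard error than the clustered one, illustrating why site density alone does not determine the error.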
Spacetime Average Density (SAD) cosmological measures
Page, Don N.
2014-11-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmological constant.
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
40 CFR 76.11 - Emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General... compliance with the Acid Rain emission limitation for NOX under the plan only if the following...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H.; Beerepoot, Maarten T. P.; Ruud, Kenneth
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Modeling Plants With Moving-Average Outputs
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
Three discrete-state-variable representations derived. Paper discusses mathematical modeling of digital control systems for plants in which outputs include combinations of instantaneous and moving-average-prefiltered measurements.
Applications of high average power nonlinear optics
Velsko, S.P.; Krupke, W.F.
1996-02-05
Nonlinear optical frequency convertors (harmonic generators and optical parametric oscillators) are reviewed with an emphasis on high average power performance and limitations. NLO materials issues and NLO device designs are discussed in reference to several emerging scientific, military, and industrial commercial applications requiring ~100 watt average power levels in the visible and infrared spectral regions. Research efforts required to enable practical ~100 watt class NLO-based laser systems are identified.
Monthly average polar sea-ice concentration
Schweitzer, Peter N.
1995-01-01
The data contained in this CD-ROM depict monthly averages of sea-ice concentration in the modern polar oceans. These averages were derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) instruments aboard satellites of the U.S. Air Force Defense Meteorological Satellite Program from 1978 through 1992. The data are provided as 8-bit images using the Hierarchical Data Format (HDF) developed by the National Center for Supercomputing Applications.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.
NASA Technical Reports Server (NTRS)
Moore, Franklin K; Ostrach, Simon
1957-01-01
The time-average characteristics of boundary layers over a flat plate in nearly quasi-steady flow are determined. The plate may be either insulated or isothermal. The time averages are found without specifying the plate velocity explicitly except that it is positive and has an average value.
Conditional simulation of geologically averaged block permeabilities
NASA Astrophysics Data System (ADS)
Journel, A. G.
1996-08-01
Currently available hardware and software for flow simulation can handle up to hundreds of thousands of blocks, or more comfortably tens of thousands of blocks. This limits the discretization of the reservoir model to an extremely coarse grid, say 200 × 200 × 25 for 10^6 blocks. Such a coarse grid cannot represent the structural and petrophysical variability at the resolution provided to geologists by well logs and outcrops. Thus there is no alternative to averaging the impact of all small-scale, within-block, heterogeneities into block 'pseudos' or average values. The flow simulator will account for geological description only through those pseudos, hence detailed modelling of geological heterogeneity should not go beyond the information that block pseudos can carry, at least for flow simulation purposes. It is suggested that the present drive in outcrop sampling be clearly redirected at evaluating 'geopseudos', i.e. at evaluating how small-scale variability (both structural and petrophysical) of typical depositional units averages out into large blocks' effective transmissivities and relative permeabilities. Outcrop data would allow the building of generic, high-resolution, numerical models of the geo-variability within a typical depositional unit: this is where geology intervenes. Then, this numerical model would be input into a generic flow simulator, single or multiphase, yielding genetic block averages, for blocks of various sizes and geometries: this is where the reservoir engineer intervenes. Next, the spatial statistics of these block averages (histograms, variograms, …) would be inferred: this is where the geostatistician intervenes. Last comes the problem of filling-in the actual reservoir volume with simulated block averages specific to each depositional unit. Because each reservoir is unique, random drawing of block average values from the previously inferred generic distributions would not be enough.
The placement of block average values in the specific reservoir volume must be made conditional on local data whether well log, seismic or production-derived. This non-trivial task of 'conditional simulation' of block average is the challenge of both the reservoir geologist and geostatistician. This paper proposes an avenue of approach that draws from the pioneering works of Steve Begg at BP-Alaska (1992, 1994) and Jaime Gomez-Hernandez at Universidad of Valencia (1990, 1991).
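The basic upscaling step described above, collapsing fine-scale within-block heterogeneity into one 'pseudo' value per coarse block, can be sketched with simple power averages. This is only an illustration of the idea: real geopseudo workflows derive block values from flow simulation on the fine grid, not from analytic means.

```python
import numpy as np

def block_average_permeability(k_fine, block, method="geometric"):
    """Average a fine-grid permeability field into coarse-block 'pseudos'.

    k_fine : 2-D array of fine-cell permeabilities
    block  : (rows, cols) of fine cells per coarse block
    method : 'arithmetic' (flow parallel to layers), 'harmonic'
             (flow across layers), or 'geometric' (a common heuristic
             for statistically isotropic media)
    """
    br, bc = block
    nr, nc = k_fine.shape
    assert nr % br == 0 and nc % bc == 0
    # gather the fine cells of each coarse block into the last axis
    blocks = k_fine.reshape(nr // br, br, nc // bc, bc).swapaxes(1, 2)
    blocks = blocks.reshape(nr // br, nc // bc, br * bc)
    if method == "arithmetic":
        return blocks.mean(axis=-1)
    if method == "harmonic":
        return 1.0 / np.mean(1.0 / blocks, axis=-1)
    return np.exp(np.log(blocks).mean(axis=-1))  # geometric mean

rng = np.random.default_rng(0)
k = np.exp(rng.normal(0.0, 1.0, size=(8, 8)))   # lognormal fine-scale field
k_blocks = block_average_permeability(k, (4, 4))
```

The harmonic, geometric, and arithmetic means bracket the true effective permeability from below and above, which is why the choice of average matters for the resulting pseudos.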
State-Variable Representations For Moving-Average Sampling
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1991-01-01
Two state-variable representations derived for continuous-time plant driven by control algorithm including zero-order hold and measurements sampled at multiple rates by multiple-input/multiple-output moving-average processes. New representations enhance observability and controllability of plant. Applications include mathematical modeling of navigation systems including star trackers, gyroscopes, and accelerometers.
Interpreting Sky-Averaged 21-cm Measurements
NASA Astrophysics Data System (ADS)
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions.I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. 
Finally, (3) the independent constraints most likely to aid in the interpretation of global 21-cm signal measurements are detections of Lyman Alpha Emitters at high redshifts and constraints on the midpoint of reionization, both of which are among the primary science objectives of ongoing or near-future experiments.
Average Magnification Effect of Clumping of Matter
NASA Astrophysics Data System (ADS)
Kibble, T. W. B.; Lieu, Richard
2005-10-01
The aim of this paper is to reexamine the question of the average magnification in a universe with some inhomogeneously distributed matter. We present an analytic proof, valid under rather general conditions, including clumps of any shape and size and strong lensing, that as long as the clumps are uncorrelated, the average "reciprocal" magnification (in one of several possible senses) is precisely the same as in a homogeneous universe with an equal mean density. From this result, we also show that a similar statement can be made about one definition of the average "direct" magnification. We discuss, in the context of observations of discrete and extended sources, the physical significance of the various different measures of magnification and the circumstances in which they are appropriate.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
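The lag structure in this abstract, correlating each year's literary index with the trailing 11-year average of the economic index, is easy to sketch. The data below are synthetic stand-ins generated for illustration, not the authors' misery indices.

```python
import numpy as np

# Synthetic annual economic misery index (illustrative stand-in data)
rng = np.random.default_rng(1)
misery = rng.normal(7.0, 2.0, size=80)

window = 11  # the lag at which the paper finds peak goodness of fit

# Trailing average: entry for year i averages the *previous* `window` years
avg = np.array([misery[i - window:i].mean() for i in range(window, len(misery))])

# Synthetic literary index tracking the trailing economic average plus noise
literary = avg + rng.normal(0.0, 0.1, size=avg.size)

# Pearson correlation between the two series, as in the paper's fit
r = np.corrcoef(literary, avg)[0, 1]
```

Scanning `window` over a range of lags and picking the value that maximizes `r` mirrors the paper's search for the best-fitting moving-average length.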
Model Averaging Method for Supersaturated Experimental Design
NASA Astrophysics Data System (ADS)
Salaki, Deiby T.; Kurnia, Anang; Sartono, Bagus
2016-01-01
In this paper, a new modified model averaging method is proposed. The candidate models are constructed by distinguishing the covariates into focus variables and auxiliary variables, with weight selection implemented using the Mallows criterion. The illustration shows that this model averaging method can be considered a new alternative for supersaturated experimental designs, a typical form of high-dimensional data. A supersaturated factorial design is an experimental series in which the number of factors exceeds the number of runs, so its size is not sufficient to estimate all the main effects. By using the model averaging method, the estimation or prediction power is significantly enhanced. In our illustration, the main factors are regarded as focus variables in order to give more attention to them, whereas the lesser factors are regarded as auxiliary variables, in line with the hierarchical ordering principle in experimental research. The limited empirical study shows that this method produces good predictions.
Averaged energy conditions and evaporating black holes
Ford, L.H.; Roman, T.A.
1996-02-01
In this paper the averaged weak and averaged null energy conditions, together with uncertainty-principle-type restrictions on negative energy ("quantum inequalities"), are examined in the context of evaporating black hole backgrounds in both two and four dimensions. In particular, integrals over only half-geodesics are studied. We determine the regions of the spacetime in which the averaged energy conditions are violated. In all cases where these conditions fail, there appear to be quantum inequalities which bound the magnitude and extent of the negative energy, and hence the degree of the violation. The possible relevance of these results for the validity of singularity theorems in evaporating black hole spacetimes is discussed. © 1996 The American Physical Society.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources. PMID:26594695
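The AIC weights this abstract critiques are computed from AIC differences relative to the best model. A minimal sketch of that standard calculation (not the authors' code) makes the point concrete: the weights rank models, and summing them over models containing a predictor does not by itself measure that predictor's importance.

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC). Weights sum to 1 and quantify
    relative support for each *model*, not for individual predictors.
    """
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Three candidate models, differing by 2 and 10 AIC units from the best
w = akaike_weights([100.0, 102.0, 110.0])
```

A model 10 AIC units worse than the best receives essentially no weight, so any predictor appearing only in such models contributes nothing to the averaged estimate regardless of its effect size.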
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
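The 'long only' strategy described above can be sketched as follows. The paper's dynamic threshold rule is not reproduced here; as a stand-in, the sketch uses a hypothetical trailing stop set a fixed fraction below the running maximum since entry, so `stop_frac` is an illustrative assumption.

```python
import numpy as np

def ma_cross_with_trailing_stop(prices, window=20, stop_frac=0.05):
    """Enter long when price crosses above its trailing moving average;
    exit on a dynamic trailing stop instead of the usual cross-under.
    Returns a boolean array: True where a long position is held.
    """
    prices = np.asarray(prices, dtype=float)
    # trailing MA: ma[t - window] averages prices[t - window : t] (no lookahead)
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    pos = np.zeros(len(prices), dtype=bool)
    in_pos, peak = False, 0.0
    for t in range(window, len(prices)):
        p, m = prices[t], ma[t - window]
        if not in_pos and p > m:
            in_pos, peak = True, p                # cross-over 'buy' signal
        elif in_pos:
            peak = max(peak, p)                   # ratchet the stop upward
            if p < peak * (1.0 - stop_frac):      # trailing stop hit
                in_pos = False
        pos[t] = in_pos
    return pos

# Illustrative price path: a steady rise followed by a decline
prices = np.concatenate([np.linspace(100, 150, 60), np.linspace(150, 100, 40)])
pos = ma_cross_with_trailing_stop(prices)
```

On this path the strategy enters during the uptrend and the trailing stop exits shortly after the peak, which is the mechanism the paper credits for the smaller maximum drawdown.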
Average: the juxtaposition of procedure and context
NASA Astrophysics Data System (ADS)
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Polarized electron beams at milliampere average current
Poelker, Matthew
2013-11-01
This contribution describes some of the challenges associated with developing a polarized electron source capable of uninterrupted days-long operation at milliampere average beam current with polarization greater than 80%. Challenges will be presented in the context of assessing the required level of extrapolation beyond the performance of today's CEBAF polarized source operating at ~200 µA average current. Estimates of performance at higher current will be based on hours-long demonstrations at 1 and 4 mA. Particular attention will be paid to beam-related lifetime-limiting mechanisms, and strategies to construct a photogun that operates reliably at bias voltages >350 kV.
Initial Conditions in the Averaging Cognitive Model
ERIC Educational Resources Information Center
Noventa, S.; Massidda, D.; Vidotto, G.
2010-01-01
The initial state parameters s_0 and w_0 are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…
Bayesian Model Averaging for Propensity Score Analysis
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
Average Annual Rainfall Over the Globe
NASA Astrophysics Data System (ADS)
Agrawal, D. C.
2013-12-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 × 10^17 J of solar radiation per second and it is divided over various channels as given in Table 1. It keeps our planet warm and maintains its average temperature of 288 K with the help of the atmosphere in such a way that life can survive. It also recycles the water in the oceans/rivers/lakes by initial evaporation and subsequent precipitation; the average annual rainfall over the globe is around one meter. According to M. King Hubbert, the amount of solar power going into the evaporation and precipitation channel is 4.0 × 10^16 W. Students can verify the value of average annual rainfall over the globe by utilizing this part of solar energy. This activity is described in the next section.
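The student exercise described above reduces to a few lines of arithmetic. The sketch below uses the abstract's figure of 4.0 × 10^16 W for the evaporation-precipitation channel; the latent heat of vaporization and the Earth's surface area are standard textbook values supplied here, not taken from the abstract.

```python
# Back-of-the-envelope check of the ~1 m average annual rainfall
SECONDS_PER_YEAR = 3.156e7
P_EVAP = 4.0e16            # W going into evaporation (Hubbert's figure)
L_VAP = 2.26e6             # J/kg, latent heat of vaporization of water
RHO = 1000.0               # kg/m^3, density of water
AREA = 5.1e14              # m^2, surface area of the globe

energy_per_year = P_EVAP * SECONDS_PER_YEAR      # J absorbed by evaporation
mass_evaporated = energy_per_year / L_VAP        # kg of water lifted per year
depth = mass_evaporated / (RHO * AREA)           # metres of rain per year

print(round(depth, 2))   # ~1.1 m, consistent with "around one meter"
```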
The periodic average structure of particular quasicrystals.
Steurer; Haibach
1999-01-01
The non-crystallographic symmetry of d-dimensional (dD) quasiperiodic structures is incompatible with lattice periodicity in dD physical space. However, dD quasiperiodic structures can be described as irrational sections of nD (n > d) periodic hypercrystal structures. By appropriate oblique projection of particular hypercrystal structures onto physical space, discrete periodic average structures can be obtained. The boundaries of the projected atomic surfaces give the maximum distance of each atom in a quasiperiodic structure from the vertices of the reference lattice of its average structure. These maximum distances turn out to be smaller than even the shortest atomic bond lengths. The metrics of the average structure of a 3D Ammann tiling, for instance, with edge lengths of the unit tiles equal to the bond lengths in elemental aluminium, correspond almost exactly to the metrics of face-centred-cubic aluminium. This is remarkable since most stable quasicrystals contain aluminium as the main constituent. The study of the average structure of quasicrystals can be a valuable aid to the elucidation of the geometry of quasicrystal-to-crystal transformations. It can also contribute to the derivation of the physically most relevant Brillouin (Jones) zone. PMID:10927229
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Glenzinski, D.; /Fermilab
2008-01-01
This paper summarizes a talk given at the Top2008 Workshop at La Biodola, Isola d'Elba, Italy. The status of the world-average top-quark mass is discussed. Some comments about the challenges facing the experiments in order to further improve the precision are offered.
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.
BEN-ZVI, ILAN; DAYRAN, D.; LITVINENKO, V.
2005-08-21
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness, and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
NASA Astrophysics Data System (ADS)
1994-05-01
Time averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
NASA Technical Reports Server (NTRS)
1994-01-01
Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short-duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2014-12-01
The JPL Airborne Snow Observatory (ASO) is an integrated LIDAR and spectrometer measuring snow depth and rate of snow melt in the Sierra Nevada, specifically the Tuolumne River Basin, Sierra Nevada, California, above the O'Shaughnessy Dam of the Hetch Hetchy reservoir, and the Uncompahgre Basin, Colorado, amongst other sites. The ASO data were delivered to water resource managers from the California Department of Water Resources in under 24 hours from the time the Twin Otter aircraft landed in Mammoth Lakes, CA to the time disks were plugged in to the ASO Mobile Compute System (MCS) deployed at the Sierra Nevada Aquatic Research Laboratory (SNARL) near the airport. ASO performed weekly flights, and each flight collected between 500 GB and 1 TB of raw data, which was then processed from level 0 data products all the way to full level 4 maps of Snow Water Equivalent, albedo mosaics, and snow depth from LIDAR. These data were produced by Interactive Data Language (IDL) algorithms which were then unobtrusively and automatically integrated into an Apache OODT and Apache Tika based Big Data processing system. Data movement was both electronic and physical, including novel uses of LaCie 1 and 2 terabyte (TB) data bricks and deployment in rugged terrain. The MCS was controlled remotely from the Jet Propulsion Laboratory, California Institute of Technology (JPL) in Pasadena, California on behalf of the National Aeronautics and Space Administration (NASA). Communication was aided through the use of novel Internet Relay Chat (IRC) command and control mechanisms and through the use of the Notifico open source communication tools. This talk will describe the high-powered, lightweight Big Data processing system that we developed for ASO and its implications more broadly for airborne missions at NASA and throughout the government.
The lessons learned from ASO show the potential for a large impact on the development of Big Data processing systems in the years to come.
Evolution of the average avalanche shape with the universality class
Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J
2013-01-01
A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571
Stochastic averaging and sensitivity analysis for two scale reaction networks.
Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G
2016-02-21
In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process. PMID:26896973
Neutron average cross sections of ²³⁷Np
Noguere, G.
2010-04-15
This work reports ²³⁷Np neutron resonance parameters obtained from the simultaneous analysis of time-of-flight data measured at the GELINA, ORELA, KURRI, and LANSCE facilities. A statistical analysis of these resonances relying on average R-matrix and optical model calculations was used to establish consistent l-dependent average resonance parameters involved in the description of the unresolved resonance range of the ²³⁷Np neutron cross sections. For neutron orbital angular momentum l=0, we obtained an average radiation width
Stochastic averaging and sensitivity analysis for two scale reaction networks
NASA Astrophysics Data System (ADS)
Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.
2016-02-01
In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.
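The adaptive "batch-means" stopping rule mentioned in this abstract can be illustrated with a minimal sketch. This is a generic batch-means rule; the function name, default parameters, and tolerance below are our illustrative choices, not the authors' implementation:

```python
import random
import statistics

def batch_means_estimate(sample, batch_size=100, min_batches=20,
                         tol=0.05, max_batches=1000):
    """Draw batches of samples; stop once the standard error estimated
    from the batch means drops below `tol` (or max_batches is hit)."""
    means = []
    while len(means) < max_batches:
        batch = [sample() for _ in range(batch_size)]
        means.append(sum(batch) / batch_size)
        if len(means) >= min_batches:
            std_err = statistics.stdev(means) / len(means) ** 0.5
            if std_err < tol:
                break
    return sum(means) / len(means), len(means)

# Toy "simulation": estimate the mean of a Gaussian with unit variance.
random.seed(0)
est, used = batch_means_estimate(lambda: random.gauss(2.0, 1.0))
print(round(est, 2), used)
```

The same skeleton applies when `sample` draws from a stochastic simulation whose equilibration phase one wants to terminate adaptively.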
Evolution of the average avalanche shape with the universality class.
Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Mĺlřy, Knut Jřrgen; Alava, Mikko J
2013-01-01
A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics. PMID:24352571
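The "simple scaling form parameterized by two numbers" can be written down explicitly. The parameterization below is the commonly used one and reflects our reading of the abstract, with `gamma` the exponent relating average avalanche size to duration and `a` the temporal-asymmetry parameter:

```python
def avalanche_shape(x, gamma=1.7, a=0.0):
    """Average avalanche shape at rescaled time x in [0, 1].
    a = 0 gives a time-symmetric (inverted-parabola-like) shape;
    a != 0 tilts the shape, reflecting broken time-reversal symmetry."""
    return (x * (1 - x)) ** (gamma - 1) * (1 - a * (x - 0.5))

# The symmetric shape peaks at mid-duration; a positive `a` skews
# weight toward the early part of the avalanche.
print(avalanche_shape(0.5))
print(avalanche_shape(0.3, a=0.2) > avalanche_shape(0.7, a=0.2))  # True
```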
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark
2008-03-15
At present, exact averaging of the equations for flow and transport in random porous media has been achieved only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and accuracy of truncated perturbation series) are not well studied, and calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. For the common case of a stochastically homogeneous conductivity field we present a new, exactly averaged, nonlocal basic equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can derive in the same way the exact averaged nonlocal equations with a unique kernel-tensor for three-dimensional and two-dimensional flow. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
From cellular doses to average lung dose.
Hofmann, W; Winkler-Heil, R
2015-11-01
Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions. PMID:25920789
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking. PMID:20224119
Apparent and average accelerations of the Universe
Bolejko, Krzysztof; Andersson, Lars E-mail: larsa@math.miami.edu
2008-10-15
In this paper we consider the relation between the volume deceleration parameter obtained within the Buchert averaging scheme and the deceleration parameter derived from supernova observation. This work was motivated by recent findings that showed that there are models which despite having Λ = 0 have volume deceleration parameter q^vol < 0. This opens the possibility that back-reaction and averaging effects may be used as an interesting alternative explanation to the dark energy phenomenon. We have calculated q^vol in some Lemaître-Tolman models. For those models which are chosen to be realistic and which fit the supernova data, we find that q^vol > 0, while those models which we have been able to find which exhibit q^vol < 0 turn out to be unrealistic. This indicates that care must be exercised in relating the deceleration parameter to observations.
Average gluon and quark jet multiplicities
NASA Astrophysics Data System (ADS)
Kotikov, A. V.
2016-01-01
We present the results of [1, 2] for computing the QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new results stem from recent progress in timelike small-x resummation obtained in the MS-bar factorization scheme. They depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets demonstrates, by its goodness, how our results solve a longstanding problem of QCD. Including all the available theoretical input within our approach, α_s^(5)(M_Z) = 0.1199 ± 0.0026 has been obtained in the MS-bar scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln x terms through the NNLL level and of ln Q^2 terms by the renormalization group. This result is in excellent agreement with the present world average.
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
The Average Velocity in a Queue
ERIC Educational Resources Information Center
Frette, Vidar
2009-01-01
A number of cars drive along a narrow road that does not allow overtaking. Each driver has a certain maximum speed at which he or she will drive if alone on the road. As a result of slower cars ahead, many cars are forced to drive at speeds lower than their maximum ones. The average velocity in the queue offers a non-trivial example of a mean…
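The no-overtaking model described in this abstract is easy to simulate: each car's actual speed is the minimum of its own top speed and the top speeds of every car ahead, so the average velocity is a mean over a running minimum. A minimal sketch of our reading of the setup, not the article's own code:

```python
import random

def average_queue_velocity(max_speeds):
    """max_speeds is ordered front-of-queue first. With no overtaking,
    each car travels at the slowest top speed found ahead of it."""
    actual, slowest_ahead = [], float("inf")
    for v in max_speeds:
        slowest_ahead = min(slowest_ahead, v)
        actual.append(slowest_ahead)
    return sum(actual) / len(actual)

# Cars with top speeds 80, 120, 100 (front to back) all end up at 80.
print(average_queue_velocity([80, 120, 100]))  # 80.0

# Monte Carlo estimate for uniformly random top speeds.
random.seed(1)
print(round(average_queue_velocity(
    [random.uniform(60, 120) for _ in range(1000)]), 1))
```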
Stochastic Games with Average Payoff Criterion
Ghosh, M. K.; Bagchi, A.
1998-11-15
We study two-person stochastic games on a Polish state and compact action spaces and with average payoff criterion under a certain ergodicity condition. For the zero-sum game we establish the existence of a value and stationary optimal strategies for both players. For the nonzero-sum case the existence of Nash equilibrium in stationary strategies is established under certain separability conditions.
Average chemical composition of the lunar surface
NASA Technical Reports Server (NTRS)
Turkevich, A. L.
1973-01-01
The available data on the chemical composition of the lunar surface at eleven sites (3 Surveyor, 5 Apollo and 3 Luna) are used to estimate the amounts of principal chemical elements (those present in more than about 0.5% by atom) in average lunar surface material. The terrae of the moon differ from the maria in having much less iron and titanium and appreciably more aluminum and calcium.
Average Annual Rainfall over the Globe
ERIC Educational Resources Information Center
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Digital Averaging Phasemeter for Heterodyne Interferometry
NASA Technical Reports Server (NTRS)
Johnson, Donald; Spero, Robert; Shaklan, Stuart; Halverson, Peter; Kuhnert, Andreas
2004-01-01
A digital averaging phasemeter has been built for measuring the difference between the phases of the unknown and reference heterodyne signals in a heterodyne laser interferometer. This phasemeter performs well enough to enable interferometric measurements of distance with accuracy of the order of 100 pm and with the ability to track distance as it changes at a speed of as much as 50 cm/s. This phasemeter is unique in that it is a single, integral system capable of performing three major functions that, heretofore, have been performed by separate systems: (1) measurement of the fractional-cycle phase difference, (2) counting of multiple cycles of phase change, and (3) averaging of phase measurements over multiple cycles for improved resolution. This phasemeter also offers the advantage of making repeated measurements at a high rate: the phase is measured on every heterodyne cycle. Thus, for example, in measuring the relative phase of two signals having a heterodyne frequency of 10 kHz, the phasemeter would accumulate 10,000 measurements per second. At this high measurement rate, an accurate average phase determination can be made more quickly than is possible at a lower rate.
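The combination of fractional-cycle measurement, cycle counting, and multi-cycle averaging described above can be mimicked in a few lines. This is a simplified software model, not the instrument's design; the unwrapping step stands in for the hardware cycle counter:

```python
import math
import random

def averaged_phase(readings):
    """Unwrap per-cycle phase readings (so multi-cycle phase changes
    are tracked, not aliased), then average them for resolution."""
    unwrapped = [readings[0]]
    for m in readings[1:]:
        # shift m by whole cycles so it lies nearest the previous sample
        k = round((unwrapped[-1] - m) / (2 * math.pi))
        unwrapped.append(m + 2 * math.pi * k)
    return sum(unwrapped) / len(unwrapped)

# 10 kHz heterodyne frequency -> 10,000 phase readings per second;
# averaging them suppresses single-reading noise by ~sqrt(10,000).
random.seed(0)
true_phase = 1.2  # radians
readings = [(true_phase + random.gauss(0, 0.05)) % (2 * math.pi)
            for _ in range(10_000)]
print(round(averaged_phase(readings), 3))
```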
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
ERIC Educational Resources Information Center
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Job Opportunity average a CDC must... Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Job Opportunity average a CDC must... Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of one Job Opportunity per an amount of 504 loan funding that will be specified by SBA from time to...
Averaging spherically symmetric spacetimes in general relativity
Coley, A. A.; Pelavas, N.
2006-10-15
We discuss the averaging problem in general relativity, using the form of the macroscopic gravity equations in the case of spherical symmetry in volume preserving coordinates. In particular, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. On cosmological scales, the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background is found to be of the form of a spatial curvature. On astrophysical scales the correlation tensor can be interpreted as the sum of a spatial curvature and an anisotropic fluid. We briefly discuss the physical implications of these results.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du système social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine. PMID:23970171
Self-averaging in complex brain neuron signals
NASA Astrophysics Data System (ADS)
Bershadskii, A.; Dremencov, E.; Fukayama, D.; Yadid, G.
2002-12-01
Nonlinear statistical properties of the Ventral Tegmental Area (VTA) of the limbic brain are studied in vivo. The VTA plays a key role in the generation of pleasure and in the development of psychological drug addiction. It is shown that spiking time series of the VTA dopaminergic neurons exhibit long-range correlations with self-averaging behavior. This specific VTA phenomenon has no relation to the VTA rewarding function. This last result reveals the complex role of the VTA in the limbic brain.
High average power diode pumped solid state lasers for CALIOPE
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers.
Forecast of the Decadal Average Sunspot Number
NASA Astrophysics Data System (ADS)
Volobuev, D. M.; Makarenko, N. G.
2008-05-01
The forecast of the decadal average sunspot number (SN) becomes possible with an extension of telescopic observations based on proxy reconstructions using the tree ring radiocarbon data during the Holocene. These decadal numbers (SNRC) provide a powerful statistic to verify forecasting methods. Complicated dynamics of long-term solar activity and noise of proxy-based reconstruction make the one-step-ahead forecast challenging for any forecasting method. Here we construct a continuous data set of SNRC which extends the group sunspot number and the international sunspot number. The known technique of nonlinear forecast, the local linear approximation, is adapted to estimate the coming SN. Both the method and the continuous data set were tested and tuned to obtain the minimum of a normalized average prediction error (E) during the last millennium, using several past millennia as a training data set. E = 0.58σ_D is achieved in forecasting the SN successive differences, whose standard deviation is σ_D = 7.39 for the period of training. This corresponds to a correlation (r = 0.97) between true and forecasted SN. This error is significantly smaller than the prediction error when surrogate data were used for the training data set, and proves the nonlinearity in the decadal SN. The estimated coming SN is smaller than the previous one.
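A stripped-down relative of the local linear approximation, a nearest-neighbour (locally constant) one-step forecast, conveys the idea: embed the series, find past states similar to the present one, and predict from their successors. This is an illustrative sketch, not the authors' method, which fits a local linear map rather than averaging successors:

```python
import math

def nn_forecast(series, dim=3, k=5):
    """Predict the next value by averaging the successors of the k
    past delay-embedded states nearest to the current state."""
    states = [tuple(series[i:i + dim]) for i in range(len(series) - dim)]
    successors = series[dim:]          # successors[i] follows states[i]
    current = tuple(series[-dim:])     # the state to be extended
    nearest = sorted((math.dist(s, current), t)
                     for s, t in zip(states, successors))[:k]
    return sum(t for _, t in nearest) / k

# Sanity check on a noiseless sinusoid: the forecast should land close
# to the true continuation of the series.
series = [math.sin(0.3 * n) for n in range(200)]
print(round(nn_forecast(series), 3), round(math.sin(0.3 * 200), 3))
```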