Sample records for typical running time

  1. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    ERIC Educational Resources Information Center

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate for the opportunity to run during the dark phase, when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable-interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  2. Development and testing of a new system for assessing wheel-running behaviour in rodents.

    PubMed

    Chomiak, Taylor; Block, Edward W; Brown, Andrew R; Teskey, G Campbell; Hu, Bin

    2016-05-05

    Wheel running is one of the most widely studied behaviours in laboratory rodents. As a result, improved approaches for objective monitoring and for gathering more detailed information are increasingly important for evaluating rodent wheel-running behaviour. Our aim was to develop a new quantitative wheel-running system that can be used for most typical wheel-running experimental protocols. We devised a system that provides a continuous waveform amenable to real-time integration with high-speed video, ideal for wheel-running experimental protocols. While quantification of wheel-running behaviour has typically focused on the number of revolutions per unit time as an end-point measure, the approach described here allows more detailed information, such as wheel rotation fluidity, directionality, instantaneous velocity, and acceleration, in addition to the total number of rotations and the temporal pattern of wheel-running behaviour, to be derived from a single trace. We further tested this system with a running-wheel behavioural paradigm that can be used for investigating the neuronal mechanisms of procedural learning and postural stability, and discuss other potentially useful applications. This system and its ability to evaluate multiple wheel-running parameters may become a useful tool for screening new potentially important therapeutic compounds related to many neurological conditions.
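The abstract's point that velocity, acceleration, directionality, and rotation counts can all be derived from a single continuous trace can be sketched in a few lines. This is a hypothetical post-processing example (the device's actual waveform format is not specified in the abstract), assuming an angular-position trace sampled at known times:

```python
import numpy as np

def wheel_kinematics(theta, t):
    """Derive wheel-running measures from a continuous angular-position
    trace theta (radians) sampled at times t (seconds). Illustrative
    sketch in the spirit of the system described above."""
    velocity = np.gradient(theta, t)           # instantaneous rad/s
    acceleration = np.gradient(velocity, t)    # instantaneous rad/s^2
    direction = np.sign(velocity)              # +1 forward, -1 backward
    revolutions = (theta[-1] - theta[0]) / (2 * np.pi)  # net revolutions
    return velocity, acceleration, direction, revolutions

# Example: a wheel turning steadily at 2 revolutions per second for 5 s,
# sampled at 100 Hz.
t = np.arange(0, 5, 0.01)
theta = 2 * 2 * np.pi * t
vel, acc, direc, revs = wheel_kinematics(theta, t)
```

A real trace would be noisy and bidirectional, but the same single-trace derivation applies.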

  3. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
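A back-of-envelope view of the shortened run period described above (the accounting here is mine, not the paper's):

```python
# One representative week of hourly weather data per quarter instead of
# the full 52-week year.
HOURS_PER_WEEK = 7 * 24
annual_hours = 52 * HOURS_PER_WEEK       # 8736 simulated hours
shortened_hours = 4 * HOURS_PER_WEEK     # 672 simulated hours
fraction_simulated = shortened_hours / annual_hours   # ~7.7%

# The paper reports ~75% wall-clock savings rather than the ~92%
# reduction in simulated hours; plausibly the fixed costs (warm-up
# convergence, equipment sizing, reporting) do not shrink with the
# run period.
```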

  4. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    NASA Astrophysics Data System (ADS)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods such as deadline monotonic scheduling.

  5. A Method for Generating Reduced Order Linear Models of Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Chicatelli, Amy; Hartley, Tom T.

    1997-01-01

    For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real time. Models based on control design typically run near real time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.

  6. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
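The coupling of generation and unweighting that the abstract describes can be illustrated with a toy accept/reject pass: each weighted event is accepted with probability w/w_max, so nothing needs to be staged on the filesystem between the two phases. This is a hedged sketch with toy weights, not the Alpgen code; `sample_event` is a hypothetical stand-in.

```python
import random

def generate_and_unweight(n_trials, sample_event, w_max, rng):
    """Couple event generation and unweighting in a single pass:
    accept each weighted event with probability w / w_max (hit-or-miss),
    avoiding the intermediate write between the two serial phases."""
    accepted = []
    for _ in range(n_trials):
        event, w = sample_event(rng)
        if rng.random() < w / w_max:
            accepted.append(event)
    return accepted

# Toy "event" whose weight is uniform on [0, 1) with w_max = 1, so the
# expected acceptance rate is the mean weight, 0.5.
rng = random.Random(42)
events = generate_and_unweight(10_000, lambda r: ("evt", r.random()), 1.0, rng)
rate = len(events) / 10_000
```

In the MPI setting described above, each of the million threads would run such a pass independently on its own share of the trials.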

  7. Simulation of linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.

    1993-01-01

    A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.

  8. PVIScreen

    EPA Pesticide Factsheets

    PVIScreen extends the concepts of a prior model (BioVapor), which accounted for oxygen-driven biodegradation of multiple constituents of petroleum in the soil above the water table. Typically, the model is run 1000 times using various factors.
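"Run 1000 times using various factors" describes Monte Carlo uncertainty screening. A minimal sketch of that pattern, with a hypothetical toy model and made-up factor ranges (this is not PVIScreen's actual model):

```python
import random

def monte_carlo_screen(model, sample_factors, n_runs=1000, seed=7):
    """Run the model many times with factors drawn from their uncertainty
    ranges and summarize the spread of outcomes."""
    rng = random.Random(seed)
    outputs = sorted(model(**sample_factors(rng)) for _ in range(n_runs))
    return {"median": outputs[n_runs // 2],
            "p95": outputs[int(0.95 * n_runs)]}

def toy_model(source, biodeg):
    # Toy "attenuation": outcome proportional to source strength,
    # reduced by a biodegradation factor. Both are hypothetical.
    return source / biodeg

def draw_factors(rng):
    return {"source": rng.uniform(1.0, 2.0), "biodeg": rng.uniform(5.0, 10.0)}

stats = monte_carlo_screen(toy_model, draw_factors)
```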

  9. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model run times have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) many model runs of a fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit relative to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and of surface and groundwater modeling.
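The cost relation stated in the abstract (run time, times number of runs, divided by parallelization opportunities) can be written directly; the numbers below are illustrative only:

```python
def computational_demand(run_time_s, n_runs, parallel_ways):
    """Wall-clock demand as stated above: run time multiplied by the
    number of model runs, divided by the parallelization opportunities."""
    return run_time_s * n_runs / parallel_ways

# A 2-hour model explored with 100 runs (the upper end of the 50-100
# range cited above) on 10 parallel workers still costs ~20 hours of
# wall-clock time -- the motivation for frugal and surrogate methods.
demand_s = computational_demand(2 * 3600, 100, 10)
```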

  10. Running of the scalar spectral index in bouncing cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehners, Jean-Luc; Wilson-Ewing, Edward, E-mail: jean-luc.lehners@aei.mpg.de, E-mail: wilson-ewing@aei.mpg.de

    We calculate the running of the scalar spectral index in the ekpyrotic and matter bounce cosmological scenarios, and find that it is typically negative for ekpyrotic models, while it is typically positive for realizations of the matter bounce where multiple fields are present. This can be compared to inflation, where the observationally preferred models typically predict a negative running. The magnitude of the running is expected to be between 10⁻⁴ and up to 10⁻², leading in some cases to interesting expectations for near-future observations.
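The "running" in question is the standard scale dependence of the scalar spectral index; in the convention commonly used in CMB analyses,

```latex
\alpha_s \equiv \frac{\mathrm{d} n_s}{\mathrm{d} \ln k}, \qquad
\mathcal{P}_\zeta(k) \propto \left(\frac{k}{k_*}\right)^{\,n_s - 1 + \frac{1}{2}\alpha_s \ln(k/k_*)},
```

so the abstract's statement amounts to $\alpha_s < 0$ for ekpyrotic models and $\alpha_s > 0$ for multi-field matter bounces, with $10^{-4} \lesssim |\alpha_s| \lesssim 10^{-2}$.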

  11. Physiological, Biomechanical, and Maximal Performance Evaluation of Medium Rucksack Prototypes

    DTIC Science & Technology

    2013-07-01

    injuries that limit the ROM about the shoulder, hip, knee, or ankle joint, were excluded from participation. Volunteers abstained from heavy and...time histories of running strides differ among individuals. Individuals who make initial contact with their heels (heel-strike runners) show a...and a relatively large force. In the current data set, not all volunteers displayed the impact peak that is typical of heel-strike running. The

  12. Assessment and Grading in Physical Education

    ERIC Educational Resources Information Center

    Mohnsen, Bonnie

    2006-01-01

    This article discusses the basis for assessing and grading students in physical education. Although students should dress appropriately for physical education, be physically active during class time, and improve their fitness (e.g., mile-run time), these items are typically not included in the physical education content standards. The vast…

  13. Real Time Linux - The RTOS for Astronomy?

    NASA Astrophysics Data System (ADS)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real-time Linux, based upon RedHat 5.2, was available. There was a detailed presentation on the nature of real-time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real-time Linux responses to time-interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real-time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real-time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and offers a fully functioning workstation with co-existing hard real-time performance. The counterweights, the negatives, were also discussed: lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real-time programming issues.
See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.

  14. A High-Throughput Biological Calorimetry Core: Steps to Startup, Run, and Maintain a Multiuser Facility.

    PubMed

    Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C

    2016-01-01

    Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative about biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample-volume requirements and reduced user intervention time compared to the manual instruments have improved turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to higher throughput of data, which has been leveraged into grant support, attracted new faculty hires, and led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.

  15. The virtual slice setup.

    PubMed

    Lytton, William W; Neymotin, Samuel A; Hines, Michael L

    2008-06-30

    In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time courses and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook recording shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to the typical instantaneous parameter change. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.

  16. PPC750 Performance Monitor

    NASA Technical Reports Server (NTRS)

    Meyer, Donald; Uchenik, Igor

    2007-01-01

    The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high-water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.

  17. Interactions between hyporheic flow produced by stream meanders, bars, and dunes

    USGS Publications Warehouse

    Stonedahl, Susa H.; Harvey, Judson W.; Packman, Aaron I.

    2013-01-01

    Stream channel morphology from grain-scale roughness to large meanders drives hyporheic exchange flow. In practice, it is difficult to model hyporheic flow over the wide spectrum of topographic features typically found in rivers. As a result, many studies only characterize isolated exchange processes at a single spatial scale. In this work, we simulated hyporheic flows induced by a range of geomorphic features including meanders, bars and dunes in sand bed streams. Twenty cases were examined with 5 degrees of river meandering. Each meandering river model was run initially without any small topographic features. Models were run again after superimposing only bars and then only dunes, and then run a final time after including all scales of topographic features. This allowed us to investigate the relative importance and interactions between flows induced by different scales of topography. We found that dunes typically contributed more to hyporheic exchange than bars and meanders. Furthermore, our simulations show that the volume of water exchanged and the distributions of hyporheic residence times resulting from various scales of topographic features are close to, but not linearly additive. These findings can potentially be used to develop scaling laws for hyporheic flow that can be widely applied in streams and rivers.
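The "close to, but not linearly additive" finding suggests a simple bookkeeping check: compare the full-topography run against the sum of the incremental contributions from each scale. A toy sketch with hypothetical flux values (the paper's actual numbers are not given in the abstract):

```python
def additivity_gap(q_meander_only, q_bars_added, q_dunes_added, q_all):
    """Difference between exchange from the full-topography run and the
    linear sum of incremental contributions from each topographic scale.
    Zero would mean the scales do not interact at all."""
    bars_increment = q_bars_added - q_meander_only
    dunes_increment = q_dunes_added - q_meander_only
    linear_sum = q_meander_only + bars_increment + dunes_increment
    return q_all - linear_sum

# Hypothetical fluxes (arbitrary units): dunes dominate, and the full
# run falls slightly below the linear sum, i.e. the scales interact.
gap = additivity_gap(1.0, 1.4, 2.5, 2.7)
```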

  18. The effect of unilateral arm swing motion on lower extremity running mechanics associated with injury risk.

    PubMed

    Agresta, Cristine; Ward, Christian R; Wright, W Geoffrey; Tucker, Carole A

    2018-06-01

    Many field sports involve equipment that restricts one or both arms from moving while running. Arm swing during running has been examined from a biomechanical and physiologic perspective but not from an injury perspective. Moreover, only bilateral arm swing suppression has been studied with respect to running. The purpose of this study was to determine the influence of running with one arm restrained on lower extremity mechanics associated with running or sport-related injury. Fifteen healthy participants ran at a self-selected speed with typical arm swing, with one arm restrained and with both arms restrained. Lower extremity kinematics and spatiotemporal measures were analysed for all arm swing conditions. Running with one arm restrained resulted in increased frontal plane knee and hip angles, decreased foot strike angle, and decreased centre of mass vertical displacement compared to typical arm swing or bilateral arm swing restriction. Stride length was decreased and step frequency increased when running with one or both arms restrained. Unilateral arm swing restriction induces changes in lower extremity kinematics that are not similar to running with bilateral arm swing restriction or typical arm swing motion. Running with one arm restrained increases frontal plane mechanics associated with risk of knee injury.

  19. Effect of metrology time delay on overlay APC

    NASA Astrophysics Data System (ADS)

    Carlson, Alan; DiBiase, Debra

    2002-07-01

    The run-to-run control strategy of lithography APC is primarily composed of a feedback loop. It is known that the insertion of a time delay in a feedback loop can cause degradation in control performance and can even cause a stable system to become unstable if the time delay becomes sufficiently large. Many proponents of integrated metrology methods have cited the damage caused by metrology time delays as the primary justification for moving from stand-alone to integrated metrology. While there is little dispute over the qualitative form of this argument, very little has been published about the quantitative effects under real fab conditions: precisely how much control is lost due to these time delays. Another issue regarding time delays is that their length is not typically fixed; they vary from lot to lot, and in some cases this variance can be large, from one hour on the short side to over 32 hours on the long side. Concern has been expressed that the variability in metrology time delays can cause undesirable dynamics in feedback loops that make it difficult to optimize feedback filters and gains and at worst could drive a system unstable. By using data from numerous fabs, spanning many sizes and styles of operation, we have conducted a quantitative study of the time-delay effect on overlay run-to-run control. Our analysis resulted in the following conclusions: (1) There is a significant and material relationship between metrology time delay and overlay control under a variety of real-world production conditions. (2) The run-to-run controller can be configured to minimize sensitivity to time-delay variations. (3) The value of moving to integrated metrology can be quantified.
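The delay effect described above can be demonstrated with a toy integrating run-to-run controller: the correction applied to each lot is updated from a measurement that arrives several lots late. All numbers here are illustrative, not calibrated to any fab:

```python
import random

def simulate_r2r(delay, n_lots=500, gain=0.3, drift=0.01, noise=0.05, seed=1):
    """Toy run-to-run feedback loop: each lot's overlay error is process
    drift plus noise minus the current correction; the correction is
    updated from a measurement that arrives `delay` lots late.
    Returns the mean squared overlay error over the campaign."""
    rng = random.Random(seed)
    correction = 0.0
    errors = []
    for lot in range(n_lots):
        error = drift * lot + rng.gauss(0, noise) - correction
        errors.append(error)
        if lot >= delay:                    # delayed measurement arrives
            correction += gain * errors[lot - delay]
    return sum(e * e for e in errors) / n_lots

# Fresh metrology (1-lot delay) tracks the drift; stale metrology
# (8-lot delay) at the same gain destabilizes the loop.
fresh = simulate_r2r(delay=1)
stale = simulate_r2r(delay=8)
```

Lowering the gain restores stability at long delays at the cost of slower drift rejection, which is the tuning trade-off the paper's conclusion (2) refers to.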

  20. Operating system for a real-time multiprocessor propulsion system simulator

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1984-01-01

    The success of the Real Time Multiprocessor Operating System (RTMPOS) in the development and evaluation of experimental hardware and software systems for real time interactive simulation of air breathing propulsion systems was evaluated. The Real Time Multiprocessor Operating System (RTMPOS) provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor based simulator. A front end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by the RTMPOS which resides on the FEP. The RTMPOS acts in conjunction with the FEP's manufacturer supplied disk operating system that provides typical utilities like an assembler, linkage editor, text editor, file handling services, etc. Once a simulation is formulated, the RTMPOS provides for engineering level, run time operations such as loading, modifying and specifying computation flow of programs, simulator mode control, data handling and run time monitoring. Run time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. The RTMPOS is programmed mainly in PASCAL along with some assembly language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.

  1. Ultramarathon runners: nature or nurture?

    PubMed

    Knechtle, Beat

    2012-12-01

    Ultramarathon running is increasingly popular. An ultramarathon is defined as a running event involving distances longer than the length of a traditional marathon of 42.195 km. In ultramarathon races, ~80% of the finishers are men. Ultramarathoners are typically ~45 y old and achieve their fastest running times between 30 and 49 y for men, and between 30 and 54 y for women. Most probably, ultrarunners start with a marathon before competing in an ultramarathon. In ultramarathoners, the number of previously completed marathons is significantly higher than the number of completed marathons in marathoners. However, recreational marathoners have a faster personal-best marathon time than ultramarathoners. Successful ultramarathoners have 7.6 ± 6.3 y of experience in ultrarunning. Ultramarathoners complete more running kilometers in training than marathoners do, but they run more slowly during training than marathoners. To summarize, ultramarathoners are master runners, have a broad experience in running, and prepare differently for an ultramarathon than marathoners do. However, it is not known what motivates male ultramarathoners and where ultramarathoners mainly originate. Future studies need to investigate the motivation of male ultramarathoners, where the best ultramarathoners originate, and whether they prepare by competing in marathons before entering ultramarathons.

  2. Short-term changes in running mechanics and foot strike pattern after introduction to minimalistic footwear.

    PubMed

    Willson, John D; Bjorhus, Jordan S; Williams, D S Blaise; Butler, Robert J; Porcari, John P; Kernozek, Thomas W

    2014-01-01

    Minimalistic footwear has garnered widespread interest in the running community, based largely on the premise that the footwear may reduce certain running-related injury risk factors through adaptations in running mechanics and foot strike pattern. Objective: to examine short-term adaptations in running mechanics among runners who typically run in conventional cushioned heel running shoes as they transition to minimalistic footwear. Design: a 2-week, prospective, observational study. Setting: a movement science laboratory. Participants: nineteen female runners with a rear foot strike (RFS) pattern who usually train in conventional running shoes. Methods: the participants trained for 20 minutes, 3 times per week for 2 weeks by using minimalistic footwear. Three-dimensional lower extremity running mechanics were analyzed before and after this 2-week period. Hip, knee, and ankle joint kinematics at initial contact; step length; stance time; peak ankle joint moment and joint work; impact peak; vertical ground reaction force loading rate; and foot strike pattern preference were evaluated before and after the intervention. Results: the knee flexion angle at initial contact increased 3.8° (P < .01), but the ankle and hip flexion angles at initial contact did not change after training. No changes in ankle joint kinetics or running temporospatial parameters were observed. The majority of participants (71%), before the intervention, demonstrated an RFS pattern while running in minimalistic footwear. The proportion of runners with an RFS pattern did not decrease after 2 weeks (P = .25). Those runners who chose an RFS pattern in minimalistic shoes experienced a vertical loading rate that was 3 times greater than those who chose to run with a non-RFS pattern. Conclusions: few systematic changes in running mechanics were observed among participants after 2 weeks of training in minimalistic footwear. The majority of the participants continued to use an RFS pattern after training in minimalistic footwear, and these participants experienced higher vertical loading rates. Continued exposure to these greater loading rates may have detrimental effects over time. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  3. An empirical study of race times in recreational endurance runners.

    PubMed

    Vickers, Andrew J; Vertosick, Emily A

    2016-01-01

    Studies of endurance running have typically involved elite athletes, small sample sizes and measures that require special expertise or equipment. We examined factors associated with race performance and explored methods for race time prediction using information routinely available to a recreational runner. An Internet survey was used to collect data from recreational endurance runners (N = 2303). The cohort was split 2:1 into a training set and validation set to create models to predict race time. Sex, age, BMI and race training were associated with mean race velocity for all race distances. The difference in velocity between males and females decreased with increasing distance. Tempo runs were more strongly associated with velocity for shorter distances, while typical weekly training mileage and interval training had similar associations with velocity for all race distances. The commonly used Riegel formula for race time prediction was well-calibrated for races up to a half-marathon, but dramatically underestimated marathon time, giving times at least 10 min too fast for half of runners. We built two models to predict marathon time. The mean squared error for Riegel was 381 compared to 228 (model based on one prior race) and 208 (model based on two prior races). Our findings can be used to inform race training and to provide more accurate race time predictions for better pacing.
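The Riegel formula that the study evaluates is a standard power-law prediction: a time at one distance is scaled by the distance ratio raised to a fatigue exponent, conventionally 1.06. A minimal implementation:

```python
def riegel_time(t1, d1, d2, exponent=1.06):
    """Riegel's endurance formula: predicted time T2 = T1 * (D2/D1)**1.06,
    where T1 is a known race time at distance D1 (units just need to be
    consistent between the two distances)."""
    return t1 * (d2 / d1) ** exponent

# Predict a marathon (42.195 km) from a 1:45 half-marathon (105 min):
# roughly 219 min. The study above found this underestimates marathon
# time by at least 10 minutes for half of recreational runners.
predicted_min = riegel_time(105, 21.0975, 42.195)
```

The study's improved models add runner-specific predictors (training mileage, and a second prior race where available) on top of this baseline.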

  4. Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application

    NASA Astrophysics Data System (ADS)

    Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.

    2011-12-01

    EPICS is used worldwide, mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well suited to the design and development of automation systems, which are typically VME- or PLC-based, and to soft real-time systems, it may present several drawbacks when used to develop hard real-time systems and applications, especially when a general purpose operating system such as plain Linux is chosen. This is particularly true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems that are intended to run on general purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different Tokamaks. In this paper, we first summarize differences and similarities between EPICS IOC and MARTe. Then we report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux with an IO control algorithm implemented in an EPICS IOC and in MARTe.

  5. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
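    The tension between millisecond resolution and day-long simulated time described above can be made concrete with simple arithmetic; the step size and duration below are assumed illustrative values.

```python
# Assumed illustrative values: 0.1 ms resolution (fine enough for typical
# STDP windows) over one day of model time (long-term plasticity scale).
dt_s = 1e-4
simulated_s = 24 * 3600
steps = round(simulated_s / dt_s)        # integration steps required

# At one tenth of real-time (the practical ceiling reported above),
# that single day of model time costs ten days of wall clock.
wall_clock_days = (simulated_s / 0.1) / 86400
```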

  6. Support for Online Calibration in the ALICE HLT Framework

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g. the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes, and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop, in order to apply the calibration to the online track reconstruction. Online calibration and the feedback loop are subject to time constraints so as to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Running these tasks in asynchronous processes separates them from normal data taking and makes the scheme resilient to failures. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on it, as well as our current scheme for commissioning this for production use.

  7. Three-Dimensional Near Infrared Imaging of Pathophysiological Changes Within the Breast

    DTIC Science & Technology

    2008-03-01

    StO2: Oxygen Saturation (in %); H2O: Water content (in %); a: Scattering Amplitude; b: Scattering Power. Typically in these cases of noisy...estimated from Fig. 2(a) for the NN/NM ratio involved. The deviation in run-time that occurs in practice is likely due to the cost of memory management

  8. Intervention for Young Children Displaying Coordination Disorders

    ERIC Educational Resources Information Center

    Chambers, Mary E.; Sugden, David A.

    2016-01-01

    The years from 3 to 6 are a time when children develop fundamental movement skills that are the building blocks for the functional movements they use throughout their lives. By 6 years of age, a typically developing child will have in place a full range of movement skills, including, running, jumping, hopping, skipping, climbing, throwing,…

  9. Punchets: nonlinear transport in Hamiltonian pump-ratchet hybrids

    NASA Astrophysics Data System (ADS)

    Dittrich, Thomas; Medina Sánchez, Nicolás

    2018-02-01

    ‘Punchets’ are hybrids between ratchets and pumps, combining a spatially periodic static potential, typically asymmetric under space inversion, with a local driving that breaks time-reversal invariance; they are intended to model metal or semiconductor surfaces irradiated by a collimated laser beam. Their crucial feature is irregular driven scattering between asymptotic regions supporting periodic (as opposed to free) motion. With all binary spatio-temporal symmetries broken, scattering in punchets typically generates directed currents. Here we study the underlying nonlinear transport mechanisms, from chaotic scattering to the parameter dependence of the currents, in three types of Hamiltonian models: (i) spatially periodic potentials where spatial and temporal symmetries are broken only in the driven scattering region; (ii) spatially asymmetric (ratchet) potentials with a driving that breaks only time-reversal invariance; and, as a more realistic model of laser-irradiated surfaces, (iii) a driving in the form of a running wave confined to a compact region by a static envelope. In this last case, the induced current can even run against the direction of wave propagation, drastically evidencing its nonlinear nature. Quantizing punchets is indicated as a viable research perspective.

  10. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
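    For readers unfamiliar with the problem, a brute-force sketch of minimum vertex cover follows. This is not the replica or cavity machinery of the review, and since VC is NP-complete it is usable only on tiny graphs; the random-graph parameters are assumed for illustration.

```python
import itertools
import random

def min_vertex_cover_size(edges, vertices):
    """Smallest k such that some k-subset of vertices touches every edge.
    Exhaustive search: exponential in the worst case, tiny graphs only."""
    for k in range(len(vertices) + 1):
        for cover in itertools.combinations(vertices, k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k
    return len(vertices)

# A small Erdos-Renyi-style random graph, in the spirit of the random
# ensembles studied above (8 vertices, edge probability 0.3, fixed seed).
random.seed(0)
verts = list(range(8))
edges = [(u, v) for u, v in itertools.combinations(verts, 2)
         if random.random() < 0.3]
k = min_vertex_cover_size(edges, verts)
```

    The easy-hard transitions described in the abstract concern how the running time of such search algorithms explodes near the coverable-uncoverable threshold as the graph connectivity varies.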

  11. Running SINDA '85/FLUINT interactive on the VAX

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris

    1992-01-01

    Computer software used as an engineering tool is typically run in three modes: batch, demand, and interactive. The first two are the most popular in the SINDA world. The third is not so popular, probably owing to users' lack of access to the command procedure files for running SINDA '85, or to unfamiliarity with the SINDA '85 execution process (pre-processor, processor, compilation, linking, execution, and all of the file assignments, creations, deletions, and de-assignments). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a command procedure (the minimum modifications required in an existing demand command procedure) sufficient to run SINDA '85 on the VAX in interactive mode. To exercise the procedure, a sample problem is presented exemplifying the mode, plus additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.

  12. On the use of tower-flux measurements to assess the performance of global ecosystem models

    NASA Astrophysics Data System (ADS)

    El Maayar, M.; Kucharik, C.

    2003-04-01

    Global ecosystem models are important tools for the study of biospheric processes and their responses to environmental changes. Such models typically translate knowledge gained from local observations into estimates of regional or even global outcomes of ecosystem processes. A typical test of ecosystem models consists of comparing their output against tower-flux measurements of land surface-atmosphere exchange of heat and mass. To perform such tests, models are typically run using detailed information on soil properties (texture, carbon content,...) and vegetation structure observed at the experimental site (e.g., vegetation height, vegetation phenology, leaf photosynthetic characteristics,...). In global simulations, however, the earth's vegetation is typically represented by a limited number of plant functional types (PFTs; groups of plant species that have similar physiological and ecological characteristics). For each PFT (e.g., temperate broadleaf trees, boreal conifer evergreen trees,...), which can cover a very large area, a set of typical physiological and physical parameters is assigned. Thus, a legitimate question arises: how does the performance of a global ecosystem model run with detailed site-specific parameters compare with that of a less detailed global version in which generic parameters are attributed to the group of vegetation species forming a PFT? To answer this question, we used a multiyear dataset, measured at two forest sites with contrasting environments, to compare the seasonal and interannual variability of surface-atmosphere exchange of water and carbon predicted by the Integrated BIosphere Simulator-Dynamic Global Vegetation Model. Two types of simulations were thus performed: (a) detailed runs, in which observed vegetation characteristics (leaf area index, vegetation height,...) and soil carbon content, in addition to climate and soil type, are specified for the model run; and (b) generic runs, in which only the observed climate and soil type at the measurement sites are used to run the model. The generic runs were performed for a number of years equal to the current age of the forests, initialized with no vegetation and a soil carbon density of zero.

  13. Music Therapy Engages Children with Autism in Outdoor Play. FPG Snapshot. Number 39, February 2007

    ERIC Educational Resources Information Center

    FPG Child Development Institute, 2007

    2007-01-01

    The unstructured space, running, climbing, sliding, and loud nature of playground time can be overwhelming for children with autism who thrive on predictable and structured routines. As a result, these preschoolers often do not experience the learning and social development benefits from outdoor play seen in their typically developing classmates.…

  14. Soundscapes

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B. Porter and Laurel J. Henderson...hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on commercial...modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically

  15. Living Color Frame System: PC graphics tool for data visualization

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1993-01-01

    Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.

  16. Analysis of Operational Data: A Proof of Concept for Assessing Electrical Infrastructure Impact

    DTIC Science & Technology

    2015-11-01

    cogeneration, solar, wind, geothermal, etc.) or by prime mover (i.e., steam turbine, water turbine, gas turbine, etc.). Power plants are typically...and Time SDR Sensor Data Record TRADOC U.S. Army Training and Doctrine Command UTC Coordinated Universal Time VCM VIIRS Cloud Mask VIIRS Visible...power, and other natural sources (water or wind). The generating facilities or power plants can run by fuel (e.g., fossil fuel, hydroelectric, nuclear

  17. Investigation on the Practicality of Developing Reduced Thermal Models

    NASA Technical Reports Server (NTRS)

    Lombardi, Giancarlo; Yang, Kan

    2015-01-01

    Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate on-orbit behavior and to ensure that instruments do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture behavior at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model brought about a reduction in run time, a large time savings was not observed, nor was there a linear relationship between the percentage of nodes reduced and the time saved. However, significant losses in accuracy were observed with greater model reduction. While reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of reduced run time.

  18. Initial foot contact and related kinematics affect impact loading rate in running.

    PubMed

    Breine, Bastiaan; Malcolm, Philippe; Van Caekenberghe, Ine; Fiers, Pieter; Frederick, Edward C; De Clercq, Dirk

    2017-08-01

    This study assessed kinematic differences between foot strike patterns and their relationship with the peak vertical instantaneous loading rate (VILR) of the ground reaction force (GRF). Fifty-two runners ran at 3.2 m·s⁻¹ while we recorded GRF and lower limb kinematics and determined foot strike pattern: Typical or Atypical rearfoot strike (RFS), midfoot strike (MFS), or forefoot strike (FFS). Typical RFS had longer contact times and a lower leg stiffness than Atypical RFS and MFS. Typical RFS showed a dorsiflexed ankle (7.2 ± 3.5°) and a positive foot angle (20.4 ± 4.8°) at initial contact, while MFS showed a plantar flexed ankle (-10.4 ± 6.3°) and a more horizontal foot (1.6 ± 3.1°). Atypical RFS showed a plantar flexed ankle (-3.1 ± 4.4°) and a small foot angle (7.0 ± 5.1°) at initial contact, and had the highest VILR. For the RFS patterns (Typical and Atypical), foot angle at initial contact showed the highest correlation with VILR (r = -0.68). The higher VILR observed in Atypical RFS could be related to both ankle and foot kinematics and to a global running style that indicate a limited use of known kinematic impact-absorbing "strategies", such as initial ankle dorsiflexion in MFS or initial ankle plantar flexion in Typical RFS.

  19. Factors affecting running economy in trained distance runners.

    PubMed

    Saunders, Philo U; Pyne, David B; Telford, Richard D; Hawley, John A

    2004-01-01

    Running economy (RE) is typically defined as the energy demand for a given velocity of submaximal running, and is determined by measuring the steady-state consumption of oxygen (VO2) and the respiratory exchange ratio. Taking body mass (BM) into consideration, runners with good RE use less energy and therefore less oxygen than runners with poor RE at the same velocity. There is a strong association between RE and distance running performance, with RE being a better predictor of performance than maximal oxygen uptake (VO2max) in elite runners who have a similar VO2max. RE is traditionally measured by running on a treadmill in standard laboratory conditions, and, although this is not the same as overground running, it gives a good indication of how economical a runner is and how RE changes over time. In order to determine whether changes in RE are real, careful standardisation of footwear, time of test and nutritional status is required to limit the typical error of measurement. Under controlled conditions, RE is a stable test capable of detecting relatively small changes elicited by training or other interventions. When tracking RE between or within groups it is important to account for BM. As VO2 during submaximal exercise does not, in general, increase linearly with BM, reporting RE with respect to the 0.75 power of BM has been recommended. A number of physiological and biomechanical factors appear to influence RE in highly trained or elite runners. These include metabolic adaptations within the muscle, such as increased mitochondria and oxidative enzymes; the ability of the muscles to store and release elastic energy, by increasing the stiffness of the muscles; and more efficient mechanics, leading to less energy wasted on braking forces and excessive vertical oscillation. Interventions to improve RE are constantly sought after by athletes, coaches and sport scientists. 
Two interventions that have received recent widespread attention are strength training and altitude training. Strength training allows the muscles to utilise more elastic energy and reduce the amount of energy wasted in braking forces. Altitude exposure enhances discrete metabolic aspects of skeletal muscle, which facilitate more efficient use of oxygen. The importance of RE to successful distance running is well established, and future research should focus on identifying methods to improve RE. Interventions that are easily incorporated into an athlete's training are desirable.
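    The recommended allometric scaling of RE to the 0.75 power of body mass can be sketched directly; the runner's VO2 and mass below are hypothetical example values.

```python
def running_economy_allometric(vo2_ml_per_min, body_mass_kg, exponent=0.75):
    """Running economy as oxygen cost scaled to body mass ** 0.75,
    the allometric scaling recommended in the review above
    (units: mL · kg^-0.75 · min^-1)."""
    return vo2_ml_per_min / body_mass_kg ** exponent

# Hypothetical runner: 3500 mL/min VO2 at a fixed submaximal speed, 70 kg.
re_scaled = running_economy_allometric(3500.0, 70.0)
```

    Unlike simple per-kilogram scaling, this form avoids systematically penalising heavier runners when comparing economy across body sizes.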

  20. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
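    The reported interval widths are consistent with simple binomial arithmetic at the stated sample sizes (37 to 54 samples per run). Below is a sketch using a normal-approximation (Wald) interval, which may differ from the interval method the program actually uses; the 90% sensitivity and n = 40 are assumed example values.

```python
import math

def wald_ci_width(p_hat, n, z=1.96):
    """Width of a normal-approximation (Wald) 95% confidence interval
    for a proportion. Illustrative only; the quality program itself
    may compute intervals differently."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return 2 * half

# With ~40 evaluable samples per run and 90% observed sensitivity,
# the interval already spans nearly 20 percentage points.
width = wald_ci_width(0.90, 40)
```

    Pooling runs, as in the meta-analysis and GEE approaches above, increases the effective sample size and so narrows these intervals.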

  1. Mira: Argonne's 10-petaflops supercomputer

    ScienceCinema

    Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul

    2018-02-13

    Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
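    The "billions of hours of computing time per year" figure follows from straightforward arithmetic on Mira's publicly described configuration (49,152 nodes with 16 cores each); this is a back-of-the-envelope sketch, not an official capacity statement.

```python
# Rough arithmetic behind "billions of hours of computing time per year".
# Node and core counts are Mira's publicly described configuration.
cores = 49_152 * 16                      # 786,432 cores total
hours_per_year = 365 * 24
core_hours_per_year = cores * hours_per_year   # theoretical annual capacity
billions = core_hours_per_year / 1e9           # roughly 6.9 billion core-hours
```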

  2. Validation of the 1/12 degrees Arctic Cap Nowcast/Forecast System (ACNFS)

    DTIC Science & Technology

    2010-11-04

    IBM Power 6 (Davinci) at NAVOCEANO with a 2 hr time step for the ice model and a 30 min time step for the ocean model. All model boundaries are...run using 320 processors on the Navy DSRC IBM Power 6 (Davinci) at NAVOCEANO. A typical one-day hindcast takes approximately 1.0 wall clock hour...meter. As more observations become available, further studies of ice draft will be used as a validation tool. The IABP program archived 102 Argos

  5. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus on improving the models themselves, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  6. Fundamental movement skills and physical activity among children with and without cerebral palsy.

    PubMed

    Capio, Catherine M; Sit, Cindy H P; Abernethy, Bruce; Masters, Rich S W

    2012-01-01

    Fundamental movement skills (FMS) proficiency is believed to influence children's physical activity (PA), with those more proficient tending to be more active. Children with cerebral palsy (CP), who represent the largest diagnostic group treated in pediatric rehabilitation, have been found to be less active than typically developing children. This study examined the association of FMS proficiency with PA in a group of children with CP, and compared the data with a group of typically developing children. Five FMS (run, jump, kick, throw, catch) were tested using process- and product-oriented measures, and accelerometers were used to monitor PA over a 7-day period. The results showed that children with CP spent less time in moderate to vigorous physical activity (MVPA), but more time in sedentary behavior than typically developing children. FMS proficiency was negatively associated with sedentary time and positively associated with time spent in MVPA in both groups of children. Process-oriented FMS measures (movement patterns) were found to have a stronger influence on PA in children with CP than in typically developing children. The findings provide evidence that FMS proficiency facilitates activity accrual among children with CP, suggesting that rehabilitation and physical education programs that support FMS development may contribute to PA-related health benefits. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch at Goddard Space Flight Center (GSFC) has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project, but this typically requires taking the software system in an all-or-nothing approach, where useful components cannot easily be extracted from the whole. As a result, the system is less flexible and scalable, with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of, the run-time executive. This executive is the core of the component-based flight software commonality and reuse process adopted at Goddard.

  8. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seems appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
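    The tendency-combining idea can be illustrated with a toy analogue: two imperfect Lorenz-63 "models" share one state and contribute equally weighted tendencies, i.e. the direct tendency-combining scheme in its fully synchronized limit. Parameters, weights, and step size are assumed for illustration; the actual study couples copies of the SPEEDO climate model, not Lorenz-63.

```python
def lorenz_tendency(state, sigma, rho, beta):
    """Lorenz-63 tendencies for state (x, y, z)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

params_a = (10.0, 28.0, 8.0 / 3.0)   # "model A": standard parameters
params_b = (11.0, 27.0, 8.0 / 3.0)   # "model B": perturbed, i.e. imperfect

dt = 0.005
state = (1.0, 1.0, 1.0)
for _ in range(2000):                # forward-Euler integration
    ta = lorenz_tendency(state, *params_a)
    tb = lorenz_tendency(state, *params_b)
    # Supermodel step: advance the shared state with averaged tendencies.
    state = tuple(s + dt * 0.5 * (a + b) for s, a, b in zip(state, ta, tb))
```

    The combined trajectory behaves like a single model with intermediate parameters, which is the sense in which a supermodel yields specific trajectories rather than an ensemble spread.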

  9. Wave run-up on a high-energy dissipative beach

    USGS Publications Warehouse

    Ruggiero, P.; Holman, R.A.; Beach, R.A.

    2004-01-01

    Because of highly dissipative conditions and strong alongshore gradients in foreshore beach morphology, wave run-up data collected along the central Oregon coast during February 1996 stand in contrast to run-up data currently available in the literature. During a single data run lasting approximately 90 min, the significant vertical run-up elevation varied by a factor of 2 along the 1.6 km study site, ranging from 26 to 61% of the offshore significant wave height, and was found to be linearly dependent on the local foreshore beach slope that varied by a factor of 5. Run-up motions on this high-energy dissipative beach were dominated by infragravity (low frequency) energy with peak periods of approximately 230 s. Incident band energy levels were 2.5 to 3 orders of magnitude lower than the low-frequency spectral peaks and typically 96% of the run-up variance was in the infragravity band. A broad region of the run-up spectra exhibited an f-4 roll off, typical of saturation, extending to frequencies lower than observed in previous studies. The run-up spectra were dependent on beach slope with spectra for steeper foreshore slopes shifted toward higher frequencies than spectra for shallower foreshore slopes. At infragravity frequencies, run-up motions were coherent over alongshore length scales in excess of 1 km, significantly greater than decorrelation length scales on moderate to reflective beaches. Copyright 2004 by the American Geophysical Union.
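    The dominance of the infragravity band reported above can be mimicked with a synthetic run-up record: a strong 230-s infragravity oscillation plus a weak 10-s incident-band component (amplitudes assumed for illustration). Each sinusoid contributes amplitude²/2 to the variance, which gives the band's share directly.

```python
import math

# Synthetic 90-minute run-up record echoing the observations above:
# a dominant infragravity oscillation (230 s peak period) plus a weak
# incident-band component (10 s period). Amplitudes are assumed.
n = 5400                                  # 90 minutes at 1 Hz sampling
a_ig, a_inc = 2.0, 0.3                    # assumed amplitudes (m)
eta = [a_ig * math.sin(2 * math.pi * t / 230.0)
       + a_inc * math.sin(2 * math.pi * t / 10.0) for t in range(n)]

var_total = sum(e * e for e in eta) / n   # variance of the zero-mean record
# Each sinusoid contributes amplitude**2 / 2 to the variance, so the
# infragravity share of the variance is:
ig_share = (a_ig ** 2 / 2) / (a_ig ** 2 / 2 + a_inc ** 2 / 2)
```

    With these assumed amplitudes the infragravity share is about 98%, of the same order as the roughly 96% observed on the dissipative Oregon beach.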

  10. Determinants of Household Water Conservation Retrofit Activity: A Discrete Choice Model Using Survey Data

    NASA Astrophysics Data System (ADS)

    Cameron, T. A.; Wright, M. B.

    1990-02-01

    Economic analyses of residential water demand have typically concentrated on price and income elasticities. In the short run a substantial change in water prices might induce only small changes in consumption levels. As time passes, however, households will have the opportunity to "retrofit" existing water-using equipment to make it less water-intensive. This produces medium- to long-run demand elasticities that are higher than short-run studies suggest. We examine responses to water conservation questions appearing on the Los Angeles Department of Water and Power's 1983 residential energy survey. We find that households' decisions to install shower retrofit devices are influenced by the potential to save money on water heating bills. We attribute toilet retrofit decisions more to noneconomic factors which might be characterized as "general conservation mindedness." The endogeneity of these retrofit decisions casts some doubt on the results of studies of individual households that treat voluntary retrofits as exogenous.

  11. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
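Composite scaled sensitivities, the first of the statistics listed above, can be computed directly from a model's Jacobian. A minimal sketch following the usual Hill-and-Tiedeman-style definition; the tiny Jacobian, parameter values, and weights below are made up for illustration:

```python
import numpy as np

def composite_scaled_sensitivity(jac, params, weights):
    """css_j = sqrt( (1/ND) * sum_i [ (dy_i/dp_j) * p_j * sqrt(w_i) ]^2 ):
    one value per parameter, summarising how much the data collectively
    constrain that parameter."""
    jac = np.asarray(jac, dtype=float)
    dss = jac * np.asarray(params, dtype=float)[None, :] \
              * np.sqrt(np.asarray(weights, dtype=float))[:, None]
    return np.sqrt(np.mean(dss ** 2, axis=0))

# Hypothetical 3-observation, 2-parameter example.
jac = np.array([[1.0, 0.0],
                [0.0, 2.0],
                [1.0, 1.0]])
params = np.array([2.0, 1.0])
weights = np.array([1.0, 1.0, 1.0])
css = composite_scaled_sensitivity(jac, params, weights)
```

Parameters with a css much smaller than the largest value are candidates for being poorly constrained by the data, which is how such statistics guide which parameters to estimate.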

  12. Runtime visualization of the human arterial tree.

    PubMed

    Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T

    2007-01-01

    Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.

  13. Pulmonary function in children with developmental coordination disorder.

    PubMed

    Wu, Sheng K; Cairney, John; Lin, Hsiao-Hui; Li, Yao-Chuen; Song, Tai-Fen

    2011-01-01

    The purpose of this study was to compare pulmonary function in children with developmental coordination disorder (DCD) with children who are typically developing (TD), and also to analyze possible gender differences in pulmonary function between these groups. The Movement ABC test was used to identify the movement coordination ability of children. Two hundred and fifty participants (90 children with DCD and 160 TD children) aged 9-10 years completed this study. KoKo spirometry was used to measure pulmonary function as forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1.0). The 800-m run was also conducted to assess children's cardiopulmonary fitness in the field. There was a significant difference in pulmonary function between TD children and those with DCD. The values of FVC and FEV1.0 in TD children were significantly higher than in children with DCD. A significant but low correlation (r = -0.220, p < .001) was found between total score on the MABC and FVC; similarly, a positive but low correlation (r = 0.252, p < .001) was found between total score on the MABC and the completion time of the 800-m run. However, no significant correlation between FVC and the time of the 800-m run was found (p > .05). Significant correlations between total score on the MABC and the completion time of the 800-m run (r = 0.352, p < .05) and between FVC and the time of the 800-m run (r = -0.285, p < .05) were observed in girls with DCD but not boys with this condition. Based on the results of this study, pulmonary function in children with DCD was significantly lower than that of TD children. The field test, the 800-m run, may not be a good indicator to distinguish aerobic ability between children with DCD and those who are TD. It is possible that poor pulmonary function in children with DCD is due to reduced physical activity in this population. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Adolescent runners: the effect of training shoes on running kinematics.

    PubMed

    Mullen, Scott; Toby, E Bruce

    2013-06-01

    The modern running shoe typically features a large cushioned heel intended to dissipate the energy at heel strike to the knees and hips. The purpose of this study was to evaluate the effect that shoes have upon the running biomechanics among competitive adolescent runners. We wish to answer the question of whether running style is altered in these athletes because of footwear. Twelve competitive adolescent athletes were recruited from local track teams. Each ran on a treadmill in large heel trainers, track flats, and barefoot. Four different speeds were used to test each athlete. The biomechanics were assessed with a motion capture system. Stride length, heel height during posterior swing phase, and foot/ground contact were recorded. Shoe type markedly altered the running biomechanics. The foot/ground contact point showed differences in terms of footwear (P<0.0001) and speed (P=0.000215). When wearing trainers, the athletes landed on their heels 69.79% of the time at all speeds (P<0.001). The heel was the first point of contact <35% of the time in the flat condition and <30% in the barefoot condition. Running biomechanics are significantly altered by shoe type in competitive adolescents. Heavily heeled cushioned trainers promote a heel strike pattern, whereas track flats and barefoot promote a forefoot or midfoot strike pattern. Training in heavily cushioned trainers by the competitive runner has not been clearly shown to be detrimental to performance, but it does change the gait pattern. It is not known whether the altered biomechanics of the heavily heeled cushioned trainer may be detrimental to the adolescent runner who is still developing a running style.

  15. Modelling Agent-Environment Interaction in Multi-Agent Simulations with Affordances

    DTIC Science & Technology

    2010-04-01

    allow operations analysts to conduct statistical studies comparing the effectiveness of different systems or tactics in different scenarios. Instead of...in a Monte-Carlo batch mode, producing statistical outcomes for particular measures of effectiveness. They typically also run at many times faster...Combined with annotated signs, the affordances allowed the traveller agents to find their way around the virtual airport and to conduct their business

  16. NearFar: A computer program for nearside farside decomposition of heavy-ion elastic scattering amplitude

    NASA Astrophysics Data System (ADS)

    Cha, Moon Hoe

    2007-02-01

    The NearFar program is a package for carrying out an interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions. It contains a graphical display interface for the numerical results. A test run has been applied to elastic ¹⁶O+²⁸Si scattering at E=1503 MeV. Program summary: Title of program: NearFar. Catalogue identifier: ADYP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: designed for any machine capable of running Java; developed on a PC Pentium 4. Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition). Programming language used: Java. Number of bits in a word: 64. Memory required to execute with typical data: case dependent. No. of lines in distributed program, including test data, etc.: 3484. No. of bytes in distributed program, including test data, etc.: 142 051. Distribution format: tar.gz. Other software required: a Java runtime interpreter, or the Java Development Kit, version 5.0. Nature of physical problem: interactive nearside-farside decomposition of the heavy-ion elastic scattering amplitude. Method of solution: the user supplies an external data file or PPSM parameters from which the theoretical values of the quantities to be decomposed are calculated. Typical running time: problem dependent; in a test run, about 35 s on a 2.40 GHz Intel P4 machine.

  17. Pre-game perceived wellness highly associates with match running performances during an international field hockey tournament.

    PubMed

    Ihsan, Mohammed; Tan, Frankie; Sahrom, Sofyan; Choo, Hui Cheng; Chia, Michael; Aziz, Abdul Rashid

    2017-06-01

    This study examined the associations between pre-game wellness and changes in match running performance normalised to either (i) playing time, (ii) post-match RPE or (iii) both playing time and post-match RPE, over the course of a field hockey tournament. Twelve male hockey players were equipped with global positioning system (GPS) units while competing in an international tournament (six matches over 9 days). The following GPS-derived variables, total distance (TD), low-intensity activity (LIA; <15 km/h), high-intensity running (HIR; >15 km/h), high-intensity accelerations (HIACC; >2 m/s²) and decelerations (HIDEC; >-2 m/s²) were acquired and normalised to either (i) playing time, (ii) post-match RPE or (iii) both playing time and post-match RPE. Each morning, players completed ratings on a 0-10 scale for four variables: fatigue, muscle soreness, mood state and sleep quality, with cumulative scores determined as wellness. Associations between match performances and wellness were analysed using Pearson's correlation coefficient. Combined time and RPE normalisation demonstrated the largest associations with Δwellness compared with time or RPE alone for most variables; TD (r = -0.95; -1.00 to -0.82, p = .004), HIR (r = -0.95; -1.00 to -0.83, p = .003), LIA (r = -0.94; -1.00 to -0.81, p = .026), HIACC (r = -0.87; -1.00 to -0.66, p = .004) and HIDEC (r = -0.90; -0.99 to -0.74, p = .008). These findings support the use of wellness measures as a pre-match tool to assist with managing internal load over the course of a field hockey tournament. Highlights Fixtures during international field hockey tournaments are typically congested and impose high physiological demands on an athlete. To minimise decrements in running performance over the course of a tournament, measures to identify players who have sustained high internal loads are logically warranted.
The present study examined the association between changes in simple customised psychometric wellness measures and changes in match running performance normalised to (i) playing time, (ii) post-match RPE and (iii) playing time and post-match RPE, over the course of a field hockey tournament. Changes in match running performance were better associated with changes in wellness (r = -0.87 to -0.95) when running performances were normalised to both time and RPE compared with time or RPE alone. The present findings support the use of wellness measures as a pre-match tool to assist with managing internal load over the course of a field hockey tournament. Improved associations between wellness scores and match running performances were evident when running variables were normalised to both playing time and post-match RPE.

  18. MAGNA (Materially and Geometrically Nonlinear Analysis). Part I. Finite Element Analysis Manual.

    DTIC Science & Technology

    1982-12-01

    provided for operating the program, modifying storage capacity, preparing input data, estimating computer run times, and interpreting the output...7.1.3 Reserved File Names 7.1.16 7.1.4 Typical Execution Times on CDC Computers 7.1.18 7.2 CRAY PROGRAM VERSION 7.2.1 7.2.1 Job Control Language 7.2.1...7.2.2 Modification of Storage Capacity 7.2.8 7.2.3 Execution Times on the CRAY-I Computer 7.2.12 7.3 VAX PROGRAM VERSION 7.3.1 8 INPUT DATA 8.0.1 8.1

  19. Aging in the three-dimensional random-field Ising model

    NASA Astrophysics Data System (ADS)

    von Ohr, Sebastian; Manssen, Markus; Hartmann, Alexander K.

    2017-07-01

    We studied the nonequilibrium aging behavior of the random-field Ising model in three dimensions for various values of the disorder strength. This allowed us to investigate how the aging behavior changes across the ferromagnetic-paramagnetic phase transition. We investigated a large system size of N = 256³ spins and up to 10⁸ Monte Carlo sweeps. To reach the necessary long simulation times, we employed an implementation running on Intel Xeon Phi coprocessors, reaching single-spin-flip times as short as 6 ps. We measured typical correlation functions in space and time to extract a growing length scale and corresponding exponents.
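The single-spin-flip dynamics underlying such simulations is easy to sketch on a CPU, without the Xeon Phi optimizations and on a far smaller lattice than the 256³ system studied. The lattice size, temperature, disorder strength, and sweep count below are illustrative choices only:

```python
import numpy as np

def energy_per_spin(s, h):
    # H = -sum_<ij> s_i s_j - sum_i h_i s_i, periodic boundaries, J = 1.
    nn = sum(np.roll(s, 1, axis=ax) for ax in range(3))
    return float((-(s * nn).sum() - (h * s).sum()) / s.size)

def metropolis_rfim(L=8, h_max=1.0, T=2.0, sweeps=50, seed=0):
    """Single-spin-flip Metropolis for the 3D random-field Ising model,
    with quenched fields h_i drawn uniformly from [-h_max, h_max]."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L, L))
    h = rng.uniform(-h_max, h_max, size=(L, L, L))
    e0 = energy_per_spin(s, h)
    for _ in range(sweeps * L**3):
        i, j, k = rng.integers(0, L, size=3)
        nn = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k]
              + s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k]
              + s[i, j, (k + 1) % L] + s[i, j, (k - 1) % L])
        dE = 2.0 * s[i, j, k] * (nn + h[i, j, k])   # cost of flipping this spin
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] = -s[i, j, k]
    return e0, energy_per_spin(s, h)

e_initial, e_final = metropolis_rfim()   # energy relaxes after the quench
```

Aging studies track how quantities such as two-time correlation functions depend on the waiting time after this kind of quench; the sketch only shows the elementary update the coprocessor implementation accelerates.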

  20. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.

  1. Modeling a maintenance simulation of the geosynchronous platform

    NASA Technical Reports Server (NTRS)

    Kleiner, A. F., Jr.

    1980-01-01

    A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events - failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass and after the last pass a report is printed. Items of interest typically include the time to first maintenance, total number of maintenance trips for each pass, average capability of the system, etc.
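The pass structure described above can be sketched as a small discrete-event loop: failures are the events, each failure triggers a maintenance action, the model is re-initialized between passes, and statistics are compiled across passes. Everything numeric below (five units, exponential lifetimes, a mission length of 10 time units) is an illustrative assumption, not the platform model from the paper:

```python
import random

def simulate_pass(rng, n_units=5, mttf=2.0, mission=10.0):
    """One pass: each unit fails after an Exp(mean mttf) lifetime; every
    failure triggers one maintenance trip that renews that unit.
    Returns (time of first maintenance, total trips this mission)."""
    next_fail = [rng.expovariate(1.0 / mttf) for _ in range(n_units)]
    trips, first = 0, None
    while True:
        i = min(range(n_units), key=next_fail.__getitem__)  # next event
        t = next_fail[i]
        if t > mission:
            return first, trips
        trips += 1
        if first is None:
            first = t
        next_fail[i] = t + rng.expovariate(1.0 / mttf)      # unit renewed at t

def run_study(passes=2000, seed=1):
    # Re-initialize before each pass; compile statistics across passes.
    rng = random.Random(seed)
    firsts, trips = [], []
    for _ in range(passes):
        f, k = simulate_pass(rng)
        if f is not None:
            firsts.append(f)
        trips.append(k)
    return sum(firsts) / len(firsts), sum(trips) / len(trips)

mean_first, mean_trips = run_study()
```

For these parameters the first failure is the minimum of five exponentials (mean 2/5 = 0.4), and trips accrue at a total renewal rate of 2.5 per unit time, so roughly 25 per mission; the simulation estimates should land near those values.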

  2. Time warp operating system version 2.7 internals manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OS's used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.

  3. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  4. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  5. BaHaMAS A Bash Handler to Monitor and Administrate Simulations

    NASA Astrophysics Data System (ADS)

    Sciarra, Alessandro

    2018-03-01

    Numerical QCD is often extremely resource demanding and it is not rare to run hundreds of simulations at the same time. Each of these can last for days or even months, and it typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, some monitoring operations (e.g. copying, moving, deleting or modifying files, resuming crashed jobs, etc.) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone way, as well as thoroughly uncomfortable and inefficient! BaHaMAS was developed and successfully used in the last years as a tool to automatically monitor and administrate simulations.

  6. 50 GFlops molecular dynamics on the Connection Machine 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomdahl, P.S.; Tamayo, P.; Groenbech-Jensen, N.

    1993-12-31

    The authors present timings and performance numbers for a new short-range three-dimensional (3D) molecular dynamics (MD) code, SPaSM, on the Connection Machine-5 (CM-5). They demonstrate that runs with more than 10⁸ particles are now possible on massively parallel MIMD computers. To the best of their knowledge this is at least an order of magnitude more particles than what has previously been reported. Typical production runs show sustained performance (including communication) in the range of 47-50 GFlops on a 1024-node CM-5 with vector units (VUs). The speed of the code scales linearly with the number of processors and with the number of particles, and shows 95% parallel efficiency in the speedup.

  7. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.

    PubMed

    Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal

    2011-08-18

    RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
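For reference, the cubic-time dynamic programming baseline mentioned above (Nussinov-style base-pair maximization, in a simplified form that just counts nested pairs) looks like the sketch below; it is this O(n³) pattern, with its inner split-point loop, that Valiant-style matrix-multiplication methods accelerate. The minimum hairpin-loop length of 3 is a common convention, not something fixed by the paper:

```python
def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs in an RNA sequence
    (Nussinov-Jacobson dynamic programming, O(n^3) time, O(n^2) space)."""
    pairs = {("A", "U"), ("U", "A"), ("C", "G"),
             ("G", "C"), ("G", "U"), ("U", "G")}   # Watson-Crick + GU wobble
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):            # interval length j - i
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                    # position j left unpaired
            for k in range(i, j - min_loop):       # j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

The three nested loops over i, j, and the split point k are exactly the structure shared with CFG (CKY-style) recognition, which is why the same sub-cubic machinery applies to both.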

  8. Just-in-Time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected. However, most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible! The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software looks to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.

  9. Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach

    PubMed Central

    2011-01-01

    Background RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589

  10. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development, when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.

  11. Compliance Testing of Phosphoric Acid Anodizing Line Wet Scrubber, Metal Bonding Facility, Building 375, Kelly AFB, Texas

    DTIC Science & Technology

    1989-06-01

    boilers and incinerators). Generally the chromium emissions from the processes are particulate in nature. The trivalent chromium is converted to...runs at five different boiler and incinerator sources, typically less than 3 percent of the trivalent chromium converts to hexavalent chromium ...Emissions from this process contain 20 to 100 times more trivalent chromium than hexavalent chromium in the sample. In separating the hexavalent chromium

  12. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, enabling it to accurately evaluate the derivatives of time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed and applied to demonstrate the accuracy and time dependence of the differentiation process. The results are compared with finite-difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
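The CTSE referred to above amounts to the complex-step derivative: evaluating f at a complex-perturbed argument yields the derivative from the imaginary part with no subtraction of nearly equal numbers, which is why it stays accurate where finite differences degrade. A minimal scalar illustration (obviously not the turbomachinery solver; the test function is arbitrary):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    # f'(x) = Im[f(x + i*h)] / h + O(h^2). There is no difference of close
    # numbers, so h can be made tiny without round-off penalty.
    return np.imag(f(x + 1j * h)) / h

def forward_diff(f, x, h=1e-8):
    # Classical finite difference: accuracy limited by subtractive cancellation.
    return (f(x + h) - f(x)) / h

f = lambda x: np.exp(x) * np.sin(x)          # analytic test function
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
cs_err = abs(complex_step(f, x0) - exact)    # near machine precision
fd_err = abs(forward_diff(f, x0) - exact)    # typically around 1e-8
```

The same trick extends to whole solvers, provided every operation on the perturbed variable is complex-differentiable, which is what the "black box" source-code differentiation mentioned above automates.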

  13. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
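
    A minimal sketch of the single-run rescaling idea (not the authors' MATLAB/GPU package): run one absorption-free baseline simulation, record each escaping photon's total path length, then reweight those paths by Beer-Lambert factors to obtain diffuse reflectance at any absorption coefficient without re-simulating. The geometry, coefficients, and photon counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PHOTONS = 5000

def baseline_run(n_photons=N_PHOTONS, mus=10.0, max_steps=200):
    """One absorption-free run in a semi-infinite, isotropically scattering
    slab: record the total path length of each photon that escapes the top."""
    path_lengths = []
    for _ in range(n_photons):
        z, total, uz = 0.0, 0.0, 1.0      # depth, path length, direction cosine
        for _ in range(max_steps):
            s = -np.log(rng.random()) / mus   # exponential free path
            z += uz * s
            total += s
            if z <= 0.0:                      # escaped through the surface
                path_lengths.append(total)
                break
            uz = rng.uniform(-1.0, 1.0)       # isotropic rescattering (g = 0)
    return np.array(path_lengths)

def rescaled_reflectance(path_lengths, mua, n_photons=N_PHOTONS):
    # Rescaling step: Beer-Lambert weighting of the recorded paths gives the
    # reflectance at absorption coefficient mua from the single baseline run.
    return np.exp(-mua * path_lengths).sum() / n_photons

paths = baseline_run()
R = [rescaled_reflectance(paths, mua) for mua in (0.0, 0.1, 1.0)]
```

Because only the exponential reweighting is repeated per set of optical properties, this inner step is trivially parallel, which is what makes it a natural fit for GPU acceleration.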

  14. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  16. 77 FR 58255 - Takes of Marine Mammals Incidental to Specified Activities; Marine Geophysical Survey off the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ... straight portions of the track lines as well as the initial portions of the run-out (offshore) sections and later portions of the run-in (inshore) sections. During turns and most of the initial portion of the run... propeller has four blades and the shaft typically rotates at 750 revolutions per minute. The vessel also has...

  17. Pattern of shoreline spawning by sockeye salmon in a glacially turbid lake: evidence for subpopulation differentiation

    USGS Publications Warehouse

    Burger, C.V.; Finn, J.E.; Holland-Bartels, L.

    1995-01-01

    Alaskan sockeye salmon typically spawn in lake tributaries during summer (early run) and along clear-water lake shorelines and outlet rivers during fall (late run). Production at the glacially turbid Tustumena Lake and its outlet, the Kasilof River (south-central Alaska), was thought to be limited to a single run of sockeye salmon that spawned in the lake's clear-water tributaries. However, up to 40% of the returning sockeye salmon enumerated by sonar as they entered the lake could not be accounted for during lake tributary surveys, which suggested either substantial counting errors or that a large number of fish spawned in the lake itself. Lake shoreline spawning had not been documented in a glacially turbid system. We determined the distribution and pattern of sockeye salmon spawning in the Tustumena Lake system from 1989 to 1991 based on fish collected and radiotagged in the Kasilof River. Spawning areas and time were determined for 324 of 413 sockeye salmon tracked upstream into the lake after release. Of these, 224 fish spawned in tributaries by mid-August and 100 spawned along shoreline areas of the lake during late August. In an additional effort, a distinct late run was discovered that spawned in the Kasilof River at the end of September. Between tributary and shoreline spawners, run and spawning time distributions were significantly different. The number of shoreline spawners was relatively stable and independent of annual escapement levels during the study, which suggests that the shoreline spawning component is distinct and not surplus production from an undifferentiated run. Since Tustumena Lake has been fully deglaciated for only about 2,000 years and is still significantly influenced by glacier meltwater, this diversification of spawning populations is probably a relatively recent and ongoing event.

  18. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    NASA Astrophysics Data System (ADS)

    Muruganandam, P.; Adhikari, S. K.

    2009-10-01

    Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In a one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and the three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 are also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all). Program summaryProgram title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial Catalogue identifier: AEDU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. 
of lines in distributed program, including test data, etc.: 122 907 No. of bytes in distributed program, including test data, etc.: 609 662 Distribution format: tar.gz Programming language: FORTRAN 77 and Fortran 90/95 Computer: PC Operating system: Linux, Unix RAM: 1 GByte (i, iv, v), 2 GByte (ii, vi, vii, x, xi), 4 GByte (iii, viii, xii), 8 GByte (ix) Classification: 2.9, 4.3, 4.12 Nature of problem: These programs are designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-, two- or three-space dimensions with a harmonic, circularly-symmetric, spherically-symmetric, axially-symmetric or anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Solution method: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields the solution of stationary and/or non-stationary problems. Additional comments: This package consists of 12 programs, see "Program title", above. FORTRAN 77 versions are provided for each of the 12 and, in addition, Fortran 90/95 versions are included for ii, iii, vi, viii, ix, xii. For the particular purpose of each program please see below. Running time: Minutes on a medium PC (i, iv, v, vii, x, xi), a few hours on a medium PC (ii, vi, viii, xii), days on a medium PC (iii, ix).

    Program summary (1) Title of program: imagtime1d.F Title of electronic file: imagtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (2) Title of program: imagtimecir.F Title of electronic file: imagtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (3) Title of program: imagtimesph.F Title of electronic file: imagtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (4) Title of program: realtime1d.F Title of electronic file: realtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.

    Program summary (5) Title of program: realtimecir.F Title of electronic file: realtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.

    Program summary (6) Title of program: realtimesph.F Title of electronic file: realtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.

    Program summary (7) Title of programs: imagtimeaxial.F and imagtimeaxial.f90 Title of electronic file: imagtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (8) Title of program: imagtime2d.F and imagtime2d.f90 Title of electronic file: imagtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (9) Title of program: realtimeaxial.F and realtimeaxial.f90 Title of electronic file: realtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.

    Program summary (10) Title of program: realtime2d.F and realtime2d.f90 Title of electronic file: realtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.

    Program summary (11) Title of program: imagtime3d.F and imagtime3d.f90 Title of electronic file: imagtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems.

    Program summary (12) Title of program: realtime3d.F and realtime3d.f90 Title of electronic file: realtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 8 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.
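
    The split-step Crank-Nicolson scheme described in these summaries can be sketched for the one-dimensional harmonic trap (a rough Python analogue of what imagtime1d does in Fortran, not the distributed code itself; grid size, time step, and nonlinearity g are illustrative): each imaginary-time step applies the potential plus nonlinear term exponentially for a half step, advances the kinetic term by a Crank-Nicolson tridiagonal solve, applies another half step, and renormalizes.

```python
import numpy as np

# Dimensionless 1D GP equation: H = -0.5 d2/dx2 + 0.5 x^2 + g |psi|^2.
N, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 1e-3                      # imaginary-time step (illustrative)
g = 10.0                       # nonlinearity strength (illustrative)
V = 0.5 * x**2

psi = np.exp(-x**2 / 2)        # Gaussian initial guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    out = np.empty(n)
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

r = dt / (2 * dx**2)
a = np.full(N, -0.5 * r)       # Crank-Nicolson matrix for the kinetic step
b = np.full(N, 1 + r)
c = np.full(N, -0.5 * r)

for _ in range(2000):
    # Half step: potential + nonlinear term, applied exponentially.
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
    # Full step: kinetic term via Crank-Nicolson (tridiagonal solve).
    rhs = psi + 0.5 * r * (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1))
    rhs[0] = rhs[-1] = 0.0     # psi -> 0 at the box boundaries
    psi = thomas(a, b, c, rhs)
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
    # Imaginary-time propagation decays the norm, so renormalize each step.
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

rms = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
```

After enough imaginary-time steps, psi converges toward the stationary ground state, and observables such as the root-mean-square size can be read off, mirroring the quantities the published programs report.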

  19. Techniques for establishing schedules with wheel running as reinforcement in rats.

    PubMed

    Iversen, I H

    1993-07-01

    In three experiments, access to wheel running was contingent on lever pressing. In each experiment, the duration of access to running was reduced gradually to 4, 5, or 6 s, and the schedule parameters were expanded gradually. The sessions lasted 2 hr. In Experiment 1, a fixed-ratio 20 schedule controlled a typical break-and-run pattern of lever pressing that was maintained throughout the session for 3 rats. In Experiment 2, a fixed-interval schedule of 6 min maintained lever pressing throughout the session for 3 rats, and for 1 rat, the rate of lever pressing was positively accelerated between reinforcements. In Experiment 3, a variable-ratio schedule of 20 or 35 was in effect and maintained lever pressing at a very stable pace throughout the session for 2 of 3 rats; for 1 rat, lever pressing was maintained at an irregular rate. When the session duration was extended to successive 24-hr periods, with food and water accessible in Experiment 3, lever pressing settled into a periodic pattern occurring at a high rate at approximately the same time each day. In each experiment, the rats that developed the highest local rates of running during wheel access also maintained the most stable and highest rates of lever pressing.

  20. The Effect of Training in Minimalist Running Shoes on Running Economy

    PubMed Central

    Ridge, Sarah T.; Standifird, Tyler; Rivera, Jessica; Johnson, A. Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-01-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. 
Key points Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear prior to 10 weeks of training. Both groups (control and experimental) showed an improvement in running economy in both types of shoes after 10 weeks of training. After transitioning to minimalist running shoes, running economy was not significantly different while running in traditional or minimalist footwear. PMID:26336352

  1. The Effect of Training in Minimalist Running Shoes on Running Economy.

    PubMed

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. 
    Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear prior to 10 weeks of training. Both groups (control and experimental) showed an improvement in running economy in both types of shoes after 10 weeks of training. After transitioning to minimalist running shoes, running economy was not significantly different while running in traditional or minimalist footwear.

  2. Preliminary mixed-layer model results for FIRE marine stratocumulus IFO conditions

    NASA Technical Reports Server (NTRS)

    Barlow, R.; Nicholls, S.

    1990-01-01

    Some preliminary results from the Turton and Nicholls mixed layer model using typical FIRE boundary conditions are presented. The model includes entrainment and drizzle parametrizations as well as interactive longwave and shortwave radiation schemes. A constraint on the integrated turbulent kinetic energy balance ensures that the model remains energetically consistent at all times. The preliminary runs were used to identify the potentially important terms in the heat and moisture budgets of the cloud layer, and to assess the anticipated diurnal variability. These are compared with typical observations from the C130. Sensitivity studies also revealed the remarkable stability of these cloud sheets: a number of negative feedback mechanisms appear to operate to maintain the cloud over an extended time period. These are also discussed. The degree to which such a modelling approach can be used to explain observed features, the specification of boundary conditions, and problems of interpretation in non-horizontally uniform conditions are also addressed.

  3. High-performance hardware implementation of a parallel database search engine for real-time peptide mass fingerprinting

    PubMed Central

    Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel

    2008-01-01

    Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
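
    A toy version of the second PMF stage (shared-peak-count scoring against a database) may clarify what the search engine computes; the database, masses, and tolerance here are invented for illustration, and the FPGA design evaluates such comparisons for many proteins in parallel rather than in the serial loop shown:

```python
# Hypothetical toy database: protein name -> theoretical tryptic peptide masses (Da).
DB = {
    "protA": [501.3, 732.4, 1045.6, 1500.8, 2210.1],
    "protB": [488.2, 732.4, 990.5, 1622.9],
    "protC": [501.3, 845.5, 1045.6],
}

def pmf_score(observed, theoretical, tol=0.5):
    """Count observed masses that match some theoretical peptide within +/- tol Da."""
    return sum(any(abs(m - t) <= tol for t in theoretical) for m in observed)

def best_match(observed, db, tol=0.5):
    # Rank every database entry by shared peak count and return the top hit.
    return max(db, key=lambda name: pmf_score(observed, db[name], tol))

observed = [501.4, 1045.5, 1500.9, 1999.0]   # processed peak list (stage one output)
hit = best_match(observed, DB)
```

Because each protein's score is independent of the others, the scoring loop parallelizes trivially, which is the property the hardware search engine exploits to reach its reported speed-up.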

  4. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

    Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, incorporating all of these capabilities. This is accomplished by the use of scheduling agents, which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.

  5. A sustainable genetic algorithm for satellite resource allocation

    NASA Technical Reports Server (NTRS)

    Abbott, R. J.; Campbell, M. L.; Krenz, W. C.

    1995-01-01

    A hybrid genetic algorithm is used to schedule tasks for satellites, a problem that can be modelled as a robot whose task is to retrieve objects from a two-dimensional field. The objective is to find a schedule that maximizes the value of the objects retrieved. Typical of the real-world tasks to which this corresponds is the scheduling of ground contacts for a communications satellite. An important feature of our application is that the amount of time available for running the scheduler is not necessarily known in advance. This requires that the scheduler produce reasonably good results after a short period, but also that it continue to improve its results if allowed to run for a longer period. We satisfy this requirement by developing what we call a sustainable genetic algorithm.
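The "good results quickly, better results given longer" requirement is the defining property of an anytime algorithm. A minimal sketch of a genetic algorithm with that property, using an illustrative knapsack-style fitness rather than the paper's satellite model:

```python
import random

def sustainable_ga(values, costs, budget, generations, seed=0):
    """Anytime GA sketch: maximise total value of selected tasks under a
    cost budget. The best-so-far schedule is tracked every generation,
    so the search can be cut off at any time and still return it."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(bits):
        cost = sum(c for b, c in zip(bits, costs) if b)
        return sum(v for b, v in zip(bits, values) if b) if cost <= budget else 0

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(30)]
    best = max(pop, key=fitness)
    for _ in range(generations):               # stop whenever time runs out
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                     # truncation selection
        children = []
        while len(children) < 30:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = children
        best = max([best] + pop, key=fitness)  # never lose the best-so-far
    return best, fitness(best)
```

Because `best` only ever improves, running more generations with the same seed can never return a worse schedule than stopping early.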

  6. Swing- and support-related muscle actions differentially trigger human walk-run and run-walk transitions.

    PubMed

    Prilutsky, B I; Gregor, R J

    2001-07-01

    There has been no consistent explanation as to why humans prefer changing their gait from walking to running and from running to walking at increasing and decreasing speeds, respectively. This study examined muscle activation as a possible determinant of these gait transitions. Seven subjects walked and ran on a motor-driven treadmill for 40s at speeds of 55, 70, 85, 100, 115, 130 and 145% of the preferred transition speed. The movements of subjects were videotaped, and surface electromyographic activity was recorded from seven major leg muscles. Resultant moments at the leg joints during the swing phase were calculated. During the swing phase of locomotion at preferred running speeds (115, 130, 145%), swing-related activation of the ankle, knee and hip flexors and peaks of flexion moments were typically lower (P<0.05) during running than during walking. At preferred walking speeds (55, 70, 85%), support-related activation of the ankle and knee extensors was typically lower during stance of walking than during stance of running (P<0.05). These results support the hypothesis that the preferred walk-run transition might be triggered by the increased sense of effort due to the exaggerated swing-related activation of the tibialis anterior, rectus femoris and hamstrings; this increased activation is necessary to meet the higher joint moment demands to move the swing leg during fast walking. The preferred run-walk transition might be similarly triggered by the sense of effort due to the higher support-related activation of the soleus, gastrocnemius and vastii that must generate higher forces during slow running than during walking at the same speed.

  7. Feasibility for Application of Soil Bioengineering Techniques to Natural Wastewater Treatment Systems

    DTIC Science & Technology

    1992-12-01

    surface erosion control technique, providing shallow soil protection against the impact of heavy rains and running water. As adventitious rooting... soil protection against the impact of heavy rains and running water (Schiechtl, 1980). Figure 13 shows a typical brushmattress used as streambank

  8. ESTIMATING THE SIZE OF HISTORICAL COASTAL OREGON SALMON RUNS

    EPA Science Inventory

    Increasing the abundance of salmon in Oregon's rivers and streams is a high priority public policy objective. Salmon runs have been reduced from pre-development conditions (typically defined as prior to the 1850s), but it is unclear by how much. Considerable public and private ...

  9. High-speed GPU-based finite element simulations for NDT

    NASA Astrophysics Data System (ADS)

    Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.

    2015-03-01

    The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
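The "explicit time increments" that make the method so parallelisable update every node from its immediate neighbours only, with no global solve. A minimal finite-difference stand-in for one such increment (1D wave equation with fixed ends; an illustration of the scheme, not Pogo's FEM kernel):

```python
def explicit_wave_step(u_prev, u_now, c, dx, dt):
    """One explicit central-difference time increment of u_tt = c^2 u_xx
    with fixed ends. Each node depends only on its neighbours at the two
    previous time levels, so all nodes can be updated in parallel --
    the property a GPU implementation exploits."""
    r2 = (c * dt / dx) ** 2          # squared CFL number; need c*dt/dx <= 1
    u_next = [0.0] * len(u_now)
    for i in range(1, len(u_now) - 1):
        u_next[i] = (2 * u_now[i] - u_prev[i]
                     + r2 * (u_now[i + 1] - 2 * u_now[i] + u_now[i - 1]))
    return u_next

# Propagate an initial displacement bump for a few increments.
u0 = [0.0] * 21
u0[10] = 1.0
u1 = u0[:]                           # zero initial velocity
for _ in range(5):
    u0, u1 = u1, explicit_wave_step(u0, u1, c=1.0, dx=1.0, dt=0.5)
```

The per-node independence within a step is why explicit schemes map so cleanly onto thousands of GPU threads, at the cost of the CFL restriction on the time step.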

  10. Synthesis of Tree-Structured Computing Systems through Use of Closures.

    DTIC Science & Technology

    1984-11-29

    best hope of achieving subpolynomial running times for typical problems without a degree of interconnection that makes physical implementation...

  11. A Regional Analysis of Non-Methane Hydrocarbons And Meteorology of The Rural Southeast United States

    DTIC Science & Technology

    1996-01-01

    Zt is an ARIMA time series. This is a typical regression model, except that it allows for autocorrelation in the error term Zt. In this work, an ARMA...data=folder; var residual; run; II Statistical output of 1992 regression model on 1993 ozone data ARIMA Procedure Maximum Likelihood Estimation Approx...at each of the sites, and to show the effect of synoptic meteorology on high ozone by examining NOAA daily weather maps and climatic data

  12. Simple and conditional visual discrimination with wheel running as reinforcement in rats.

    PubMed

    Iversen, I H

    1998-09-01

    Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.

  13. Compression for an effective management of telemetry data

    NASA Technical Reports Server (NTRS)

    Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.

    1993-01-01

    A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1), and an efficient system must allow users fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performance is well beyond the requirements and proves that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (a compression ratio of 1/13) and access time for a typical query has been reduced from 975 seconds to 14 seconds.
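Of the encodings the abstract names, run-length encoding is the simplest to sketch, and it exploits exactly the stability that the per-parameter analysis looks for. A minimal illustrative version (not the T.D.B. code):

```python
def rle_encode(samples):
    """Run-length encode a sequence of parameter values as (value, count)
    pairs -- effective for telemetry parameters that sit at a constant
    value for long stretches between changes."""
    encoded = []
    for v in samples:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)
        else:
            encoded.append((v, 1))
    return encoded

def rle_decode(encoded):
    """Invert rle_encode exactly (the compression is lossless)."""
    return [v for v, n in encoded for _ in range(n)]

readings = [20.0] * 6 + [20.5] * 3 + [20.0] * 4
packed = rle_encode(readings)
assert rle_decode(packed) == readings   # lossless round trip
print(packed)                           # [(20.0, 6), (20.5, 3), (20.0, 4)]
```

A stable parameter compresses to a handful of pairs, while a noisy one would not, which is why the system picks the encoding per parameter after its stability analysis.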

  14. Urban Districts Compare Notes on Operation

    ERIC Educational Resources Information Center

    Aarons, Dakarai I.

    2009-01-01

    Urban school systems are large businesses, charged with running a wide range of noninstructional functions that typically do not garner them much national notice. Now, thanks to the work of a coalition of big-city districts, their leaders are gathering data on how those operations are run, in the hope of improving their business practices. The…

  15. No, You Do Not Have to Run Today, You Get to Run

    ERIC Educational Resources Information Center

    Gilbert, Jennie A.

    2004-01-01

    Children's natural play patterns provide opportunity for fitness development. Children typically do not care about the benefits of physical activity or the physiology behind the activities they perform, but they are very interested in participating in fun activities. Often curricula focus on how to feed children values that are important to…

  16. Input Sources of Third Person Singular –s Inconsistency in Children with and without Specific Language Impairment*

    PubMed Central

    Leonard, Laurence B.; Fey, Marc E.; Deevy, Patricia; Bredin-Oja, Shelley L.

    2015-01-01

    We tested four predictions based on the assumption that optional infinitives can be attributed to properties of the input whereby children inappropriately extract nonfinite subject-verb sequences (e.g. the girl run) from larger input utterances (e.g. Does the girl run? Let’s watch the girl run). Thirty children with specific language impairment (SLI) and 30 typically developing children heard novel and familiar verbs that appeared exclusively either in utterances containing nonfinite subject-verb sequences or in simple sentences with the verb inflected for third person singular –s. Subsequent testing showed strong input effects, especially for the SLI group. The results provide support for input-based factors as significant contributors not only to the optional infinitive period in typical development, but also to the especially protracted optional infinitive period seen in SLI. PMID:25076070

  17. The repeated bout effect of typical lower body strength training sessions on sub-maximal running performance and hormonal response.

    PubMed

    Doma, Kenji; Schumann, Moritz; Sinclair, Wade H; Leicht, Anthony S; Deakin, Glen B; Häkkinen, Keijo

    2015-08-01

    This study examined the effects of two typical strength training sessions performed 1 week apart (i.e. the repeated bout effect) on sub-maximal running performance and hormonal responses. Fourteen resistance-untrained men (age 24.0 ± 3.9 years; height 1.83 ± 0.11 m; body mass 77.4 ± 14.0 kg; VO2peak 48.1 ± 6.1 mL kg(-1) min(-1)) undertook two bouts of high-intensity strength training sessions (i.e. six-repetition maximum). Creatine kinase (CK), delayed-onset muscle soreness (DOMS) and counter-movement jump (CMJ), as well as concentrations of serum testosterone, cortisol and the testosterone/cortisol ratio (T/C), were examined prior to, immediately post, and 24 (T24) and 48 (T48) h post each strength training bout. Sub-maximal running performance was also assessed at T24 and T48 of each bout. When measures were compared between bouts at T48, the degree of elevation in CK (-58.4 ± 55.6 %) and DOMS (-31.43 ± 42.9 %) and the acute reduction in CMJ measures (4.1 ± 5.4 %) were attenuated (p < 0.05) following the second bout. Cortisol was increased until T24 (p < 0.05), although there were no differences between bouts, and no differences were found for testosterone and the T/C ratio (p > 0.05). Sub-maximal running performance was impaired until T24, although changes were not attenuated following the second bout. The initial bout appeared to provide protection against a number of muscle damage indicators, suggesting a greater need for recovery following the initial session of typical lower body resistance exercises in resistance-untrained men, although sub-maximal running should be avoided following the first two sessions.

  18. Improving Running Times for the Determination of Fractional Snow-Covered Area from Landsat TM/ETM+ via Utilization of the CUDA® Programming Paradigm

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Rittger, K.; Painter, T. H.; Selkowitz, D.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    As part of a JPL-USGS collaboration to expand distribution of essential climate variables (ECV) to include on-demand fractional snow cover, we describe our experience and implementation of a shift towards NVIDIA's CUDA® parallel computing platform and programming model. In particular, the on-demand aspect of this work involves faster processing and a reduction in overall running times for the determination of fractional snow-covered area (fSCA) from Landsat TM/ETM+. Our observations indicate that processing tasks associated with remote sensing, including the Snow Covered Area and Grain Size Model (SCAG) when applied to MODIS or Landsat TM/ETM+, are computationally intensive. We believe the shift to the CUDA programming paradigm represents a significant improvement in the ability to deliver the outcomes of such activities more quickly. We use the TMSCAG model as our subject to highlight this argument. We do this by describing how we can ingest a Landsat surface reflectance image (typically provided in HDF format) and perform spectral mixture analysis to produce land cover fractions, including snow, vegetation and rock/soil, while greatly reducing running time for such tasks. Within the scope of this work we first document the original workflow used to derive fSCA for Landsat TM and its primary shortcomings. We then introduce the logic and justification behind the switch to the CUDA paradigm for running single as well as batch jobs on the GPU in order to achieve parallel processing. Finally, we share lessons learned from porting a myriad of existing algorithms to a single code base in a single target language, as well as the benefits this ultimately provides to scientists at the USGS.
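The spectral mixture analysis step models each pixel spectrum as a linear combination of endmember spectra and solves for the fractions. With two endmembers the least-squares solution has a closed form; a minimal sketch with hypothetical reflectance values (SCAG/TMSCAG uses more endmembers and runs this per pixel, which is what the GPU accelerates):

```python
def snow_fraction(pixel, snow, soil):
    """Two-endmember linear unmixing sketch: model pixel ~ f*snow + (1-f)*soil
    and solve for the snow fraction f by least squares, clamped to [0, 1]."""
    d = [s - r for s, r in zip(snow, soil)]
    num = sum(di * (p - r) for di, p, r in zip(d, pixel, soil))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

# Hypothetical per-band endmember reflectances.
snow = [0.9, 0.8, 0.1]
soil = [0.2, 0.25, 0.3]
pixel = [0.6 * s + 0.4 * r for s, r in zip(snow, soil)]  # 60% snow mixture
print(snow_fraction(pixel, snow, soil))  # close to 0.6
```

Each pixel's fraction is independent of every other pixel's, so a whole Landsat scene unmixes as one embarrassingly parallel batch.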

  19. Future Power Production by LENR with Thin-Film Electrodes

    NASA Astrophysics Data System (ADS)

    Miley, George H.; Hora, Heinz; Lipson, Andrei; Luo, Nie; Shrestha, P. Joshi

    2007-03-01

    PdD cluster reaction theory was recently proposed to explain a wide range of Low Energy Nuclear Reaction (LENR) experiments. If understood and optimized, cluster reactions could lead to a revolutionary new nuclear power source. The route is two-fold. First, the excess heat must be obtained reproducibly and over extended run times. Second, the percentage of excess heat must be significantly (an order of magnitude or more) higher than the 20-50% typical today. The thin film methods described here have proven to be quite reproducible, e.g. providing excess heat of 20-30% in nine consecutive runs of several weeks each. However, mechanical separation of the films occurs over long runs due to the severe mechanical stresses created. Techniques to overcome these problems are possible using graded bonding techniques similar to those used in high temperature solid oxide fuel cells. Thus the remaining key issue is to increase the excess heat. The cluster model provides important insight into this. G. H. Miley, H. Hora, et al., 233rd Amer Chem Soc Meeting, Chicago, IL, March 25-29, 2007.

  20. Setting Standards for Medically-Based Running Analysis

    PubMed Central

    Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.

    2015-01-01

    Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394

  1. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
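The antithetic-variates device the study uses can be sketched generically: each uniform draw u is paired with 1-u, and for a monotone response the two evaluations are negatively correlated, so their average is much less noisy. A minimal sketch on a toy integrand (not the UKPDS model):

```python
import math
import random

def mc_plain(f, n, seed=0):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n, seed=0):
    """Antithetic variates: evaluate f at both u and 1-u. For monotone f
    the pair is negatively correlated, so the averaged estimate needs far
    fewer replications for the same precision."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / (2 * (n // 2))

truth = math.e - 1                                  # E[exp(U)] in closed form
print(abs(mc_plain(math.exp, 10000) - truth))       # plain-MC error
print(abs(mc_antithetic(math.exp, 10000) - truth))  # usually far smaller
```

This is the mechanism behind the reported ~53% cut in replications: the same precision target is hit with roughly half the function evaluations.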

  2. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel, real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  3. Assessing Stride Variables and Vertical Stiffness with GPS-Embedded Accelerometers: Preliminary Insights for the Monitoring of Neuromuscular Fatigue on the Field

    PubMed Central

    Buchheit, Martin; Gray, Andrew; Morin, Jean-Benoit

    2015-01-01

    The aim of the present study was to examine the ability of a GPS-embedded accelerometer to assess stride variables and vertical stiffness (K), which are directly related to neuromuscular fatigue during field-based high-intensity runs. The ability to detect stride imbalances was also examined. A team sport player performed a series of 30-s runs on an instrumented treadmill (6 runs at 10, 17 and 24 km·h-1) with or without his right ankle taped (aimed at creating a stride imbalance), while wearing on his back a commercially available GPS unit with an embedded 100-Hz tri-axial accelerometer. Contact (CT) and flight (FT) times and K were computed from both treadmill and accelerometer (Athletic Data Innovations) data. The agreement between the treadmill (criterion measure) and accelerometer-derived data was examined, and the ability of the different systems to detect the stride imbalance was compared. Biases were small (CT and K) and moderate (FT). The typical error of the estimate was trivial (CT), small (K) and moderate (FT), with nearly perfect (CT and K) and large (FT) correlations for treadmill vs. accelerometer. The tape induced a very large increase in the right-left foot ∆ in CT, FT and K measured by the treadmill. The tape effects on CT and K ∆ measured with the accelerometer were also very large, but of lower magnitude than with the treadmill. The tape effect on accelerometer-derived ∆ FT was unclear. The present data highlight the potential of a GPS-embedded accelerometer to assess CT and K during ground running. Key points: GPS-embedded tri-axial accelerometers may be used to assess contact time and vertical stiffness during ground running. These preliminary results open new perspectives for the field monitoring of neuromuscular fatigue and performance in run-based sports. PMID:26664264

  4. Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Januszewski, M.; Kostur, M.

    2014-09-01

    We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. 
Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).

  5. Phase Noise Reduction of Laser Diode

    NASA Technical Reports Server (NTRS)

    Zhang, T. C.; Poizat, J.-Ph.; Grelu, P.; Roch, J.-F.; Grangier, P.; Marin, F.; Bramati, A.; Jost, V.; Levenson, M. D.; Giacobino, E.

    1996-01-01

    The phase noise of single-mode laser diodes at room temperature, either free-running or using a line-narrowing technique, namely injection-locking, has been investigated. It is shown that free-running diodes exhibit very large excess phase noise, typically more than 80 dB above shot noise at 10 MHz, which can be significantly reduced by the above-mentioned technique.

  6. Exclusive Preference Develops Less Readily on Concurrent Ratio Schedules with Wheel-Running than with Sucrose Reinforcement

    ERIC Educational Resources Information Center

    Belke, Terry W.

    2010-01-01

    Previous research suggested that allocation of responses on concurrent schedules of wheel-running reinforcement was less sensitive to schedule differences than typically observed with more conventional reinforcers. To assess this possibility, 16 female Long Evans rats were exposed to concurrent FR FR schedules of reinforcement and the schedule…

  7. The SSABLE system - Automated archive, catalog, browse and distribution of satellite data in near-real time

    NASA Technical Reports Server (NTRS)

    Simpson, James J.; Harkins, Daniel N.

    1993-01-01

    Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archive, browse, order, and distribution of satellite data based upon X Window, high-bandwidth networks, and digital image rendering techniques. SSABLE automatically constructs relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near-real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) it has a bitmapped display (monochrome or greater); 2) it is running the X Window System; and 3) it is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia are typically 20-25 s.

  8. Application of Positron Doppler Broadening Spectroscopy to the Measurement of the Uniformity of Composite Materials

    NASA Astrophysics Data System (ADS)

    Quarles, C. A.; Sheffield, Thomas; Stacy, Scott; Yang, Chun

    2009-03-01

    The uniformity of rubber-carbon black composite materials has been investigated with positron Doppler Broadening Spectroscopy (DBS). The number of grams of carbon black (CB) mixed into one hundred grams of rubber, phr, is used to characterize a sample. A typical concentration for rubber in tires is 50 phr. The S parameter measured by DBS has been found to depend on the phr of the sample as well as the type of rubber and carbon black. The variation in carbon black concentration within a surface area of about 5 mm diameter can be measured by moving a standard Na-22 or Ge-68 positron source over an extended sample. The precision of the concentration measurement depends on the dwell time at a point on the sample. The time required to determine uniformity over an extended sample can be reduced by running with much higher counting rate than is typical in DBS and correcting for the systematic variation of S parameter with counting rate. Variation in CB concentration with mixing time at the level of about 0.5% has been observed.

  9. Real-time black carbon emission factor measurements from light duty vehicles.

    PubMed

    Forestieri, Sara D; Collier, Sonya; Kuwayama, Toshihiro; Zhang, Qi; Kleeman, Michael J; Cappa, Christopher D

    2013-11-19

    Eight light-duty gasoline low emission vehicles (LEV I) were tested on a Chassis dynamometer using the California Unified Cycle (UC) at the Haagen-Smit vehicle test facility at the California Air Resources Board in El Monte, CA during September 2011. The UC includes a cold start phase followed by a hot stabilized running phase. In addition, a light-duty gasoline LEV vehicle and ultralow emission vehicle (ULEV), and a light-duty diesel passenger vehicle and gasoline direct injection (GDI) vehicle were tested on a constant velocity driving cycle. A variety of instruments with response times ≥0.1 Hz were used to characterize how the emissions of the major particulate matter components varied for the LEVs during a typical driving cycle. This study focuses primarily on emissions of black carbon (BC). These measurements allowed for the determination of BC emission factors throughout the driving cycle, providing insights into the temporal variability of BC emission factors during different phases of a typical driving cycle.

  10. TIM, a ray-tracing program for METATOY research and its dissemination

    NASA Astrophysics Data System (ADS)

    Lambert, Dean; Hamilton, Alasdair C.; Constable, George; Snehanshu, Harsh; Talati, Sharvil; Courtial, Johannes

    2012-03-01

    TIM (The Interactive METATOY) is a ray-tracing program specifically tailored towards our research in METATOYs, which are optical components that appear to be able to create wave-optically forbidden light-ray fields. For this reason, TIM possesses features not found in other ray-tracing programs. TIM can either be used interactively or by modifying the openly available source code; in both cases, it can easily be run as an applet embedded in a web page. Here we describe the basic structure of TIM's source code and how to extend it, and we give examples of how we have used TIM in our own research.
    Program summary
    Program title: TIM
    Catalogue identifier: AEKY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License
    No. of lines in distributed program, including test data, etc.: 124 478
    No. of bytes in distributed program, including test data, etc.: 4 120 052
    Distribution format: tar.gz
    Programming language: Java
    Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6
    Operating system: Any; developed under Mac OS X Version 10.6
    RAM: Typically 145 MB (interactive version running under Mac OS X Version 10.6)
    Classification: 14, 18
    External routines: JAMA [1] (source code included)
    Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields.
    Solution method: Ray tracing.
    Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories; can visualise geometric optic transformations; can create anaglyphs (for viewing with coloured "3D glasses") and random-dot autostereograms of the scene; integrable into web pages.
    Running time: Problem-dependent; typically seconds for a simple scene.
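    The "Solution method: Ray tracing" entry above ultimately rests on repeated ray-object intersection tests. As an illustrative sketch only (plain Python, not TIM's actual Java classes; all names here are invented), the core ray-sphere test solves a quadratic for the nearest positive ray parameter:

```python
import math

# minimal ray-sphere intersection, the basic primitive inside any ray tracer
# (a hedged illustration; TIM's real scene objects are far richer)
def intersect_sphere(origin, direction, centre, radius):
    # solve |o + t*d - c|^2 = r^2 for the nearest t > 0 (d assumed unit length)
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# a ray along the z-axis hits a unit sphere centred 5 units away at t = 4
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

    A METATOY component would then remap the ray direction at the intersection point in a way no passive optical surface can, which is what makes the traced light-ray fields "wave-optically forbidden".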

  11. High Capacity Cathode and Carbon Nanotube-Supported Anode for Enhanced Energy Density Batteries

    DTIC Science & Technology

    2017-09-07

    energy density of typical lithium ion cells and enables twice the run time or a reduction of cell mass by 50%. This work investigated a variety of...foil for the anode) by a doctor blade on one or both sides of the foil. The composite is dried in a vacuum oven, then calendered to compress the...composite slurry was coated onto the MWCNT paper using a doctor blade. The electrode was then dried overnight in a vacuum oven at 100°C and

  12. Kaiser Permanente-Sandia National Health Care Model: Phase 1 prototype final report. Part 2 -- Domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology.

  13. Influence of fluid dynamic conditions on enzymatic hydrolysis of lignocellulosic biomass: Effect of mass transfer rate.

    PubMed

    Wojtusik, Mateusz; Zurita, Mauricio; Villar, Juan C; Ladero, Miguel; Garcia-Ochoa, Felix

    2016-09-01

    The effect of fluid dynamic conditions on enzymatic hydrolysis of acid-pretreated corn stover (PCS) has been assessed. Runs were performed in stirred tanks at several stirrer speeds, under typical conditions of temperature (50°C), pH (4.8) and solid charge (20% w/w). A complex mixture of cellulases, xylanases and mannanases was employed for PCS saccharification. At low stirring speeds (<150 rpm), comparison of the estimated mass transfer coefficients and rates with the chemical hydrolysis rates clearly shows that mass transfer is slow and constitutes the controlling step of the overall process rate. From 300 rpm upwards, however, the overall process rate is controlled by the hydrolysis reactions. The ratio between mass transfer and overall chemical reaction rates changes with time depending on the conditions of each run. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Seasonal variation in the free-running period in two Talitrus saltator populations from Italian beaches differing in morphodynamics and human disturbance

    NASA Astrophysics Data System (ADS)

    Nardi, M.; Morgan, E.; Scapini, F.

    2003-10-01

    The sandhopper Talitrus saltator Montagu (Amphipoda) is a widespread species adapted to different changing environmental conditions and which typically shows a clear circadian rhythm of locomotor activity. The populations from two beaches on the western Italian coast differing in coastline dynamics (eroded versus dynamically stable) and human disturbance (inside a natural park versus freely used and cleaned for leisure) were studied to highlight intrapopulation variation in the endogenous locomotor rhythm. The activity of adult sandhoppers was studied under constant laboratory conditions within individual recording chambers. Variation of the free-running period was analysed at individual level within each population. Greater variability was found than previously reported for the circadian rhythm period of T. saltator, and seasonal variation was shown for the first time. Differences in the level of variation were correlated with coastline dynamics.

  15. Logging while fishing technique results in substantial savings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tollefsen, E.; Everett, M.

    1996-12-01

    During wireline logging operations, tools occasionally become stuck in the borehole and require fishing. A typical fishing job can take anywhere from 1.5 to 4 days. In the Gulf of Mexico, a fishing job can easily cost between $100,000 and $500,000. These costs result from nonproductive time during the fishing trip, the associated wiper trip and relogging the well. Logging while fishing (LWF) technology is a patented system capable of retrieving a stuck fish and completing the logging run during the same pipe descent. Completing logging operations using the LWF method saves time and money. The technique also provides well information where data may not otherwise have been obtained. Other benefits include reduced fishing time and an increased level of safety.

  16. Liver transplantation with piggyback anastomosis using a linear stapler: a case report.

    PubMed

    Akbulut, S; Wojcicki, M; Kayaalp, C; Yilmaz, S

    2013-04-01

    The so-called piggyback technique of liver transplantation (PB-LT) preserves the recipient's caval vein, shortening the warm ischemic time. It can be reduced even further by using a linear stapler for the cavocaval anastomosis. Herein, we present a case of a patient undergoing a side-to-side, whole-organ PB-LT for cryptogenic cirrhosis. The upper and lower orifices of the donor caval vein were closed at the back table using a running 5-0 polypropylene suture. Three stay sutures were then placed on the caudal parts of both the recipient and donor caval veins, each with a 5-mm venotomy. The endoscopic linear stapler was placed upward through the orifices and fired. A second stapler was placed more cranially and fired, resulting in an 8-9 cm long cavocavostomy. Some loose clips were flushed away from the caval lumen. The caval anastomosis was performed within 4 minutes; the time needed to close the caval vein stapler insertion orifices (4-0 polypropylene running suture) before reperfusion was 1 minute. All other anastomoses were hand-sutured in the typical fashion. The presented technique enables one to reduce the warm ischemic time, which can be of particular importance with marginal grafts. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Changes in motor skill and fitness measures among children with high and low motor competence: a five-year longitudinal study.

    PubMed

    Hands, Beth

    2008-04-01

    Children with low motor competence (LMC) are less able to participate fully in many sports and recreational activities typically enjoyed by their well-coordinated peers. Poor fitness outcomes have been reported for these children, although previous studies have not tracked these outcomes over time. In this study, 19 children (8 girls and 11 boys) with LMC aged between 5 and 7 years were matched by age and gender with 19 children with high motor competence (HMC). Six fitness (body composition and cardiovascular endurance) and motor skill (sprint run, standing broad jump and balance) measures were repeated for each group once a year for five years. For each year of the study, the LMC group performed less well on all measures than the HMC group. Changes over time were significantly different between groups for cardiovascular endurance, the 50-m run and balance, but not for body composition, overhand throw or standing broad jump. Between the two groups, performances were significantly different for all measures, except body composition. These findings confirm the impact of LMC on fitness measures and skill performances over time.

  18. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    PubMed

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    The distributed delay model has been introduced to replace the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as the major markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 with a CV of 16.1%, compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a significantly longer running time.
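    The coincidence the abstract relies on, that the gamma-density delay with integer shape ν reduces to the classic model with ν transit compartments, can be checked numerically: the residence-time density of ν identical compartments in series is the Erlang density, i.e. a gamma density with integer shape. A minimal sketch (function names and the transit rate constant ktr are ours, not the paper's):

```python
import math

def gamma_pdf(t, nu, ktr):
    # gamma delay density with shape nu and rate ktr
    return (ktr ** nu) * t ** (nu - 1) * math.exp(-ktr * t) / math.gamma(nu)

def erlang_pdf(t, n, ktr):
    # residence-time density of n transit compartments in series (Erlang)
    return (ktr ** n) * t ** (n - 1) * math.exp(-ktr * t) / math.factorial(n - 1)

# for integer shape the two densities coincide pointwise
for t in (0.5, 1.0, 2.0):
    assert abs(gamma_pdf(t, 3, 1.2) - erlang_pdf(t, 3, 1.2)) < 1e-12
```

    A non-integer estimate such as the reported ν = 1.46 has no transit-compartment counterpart, which is precisely what the convolution-integral formulation makes representable.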

  19. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours per run); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining basis set expansion, meta-modelling and Sobol' indices is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
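    The dimensionality-reduction step the abstract describes can be sketched with NumPy: an ensemble of time-series outputs is centred and decomposed by SVD, and the few leading component scores become the scalar targets for the meta-model. This is a toy illustration under assumed data (synthetic displacement series built from two modes), not the La Frasse study itself:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# hypothetical ensemble: 40 model runs, each a 200-step displacement series
# driven by two underlying temporal modes with random coefficients
coeffs = rng.normal(size=(40, 2))
runs = coeffs[:, :1] * t + coeffs[:, 1:] * np.sin(2.0 * np.pi * t)

# basis-set expansion: centre the ensemble and take its SVD (PCA)
mean = runs.mean(axis=0)
U, s, Vt = np.linalg.svd(runs - mean, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)      # variance share per component

# low-dimensional scores: one scalar pair per run, ready for a meta-model
scores = (runs - mean) @ Vt[:2].T        # shape (40, 2)
```

    Sobol' indices would then be computed per component from a cheap surrogate fitted to these scores, rather than from the full time-dependent model.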

  20. The GH-IGF-I response to typical field sports practices in adolescent athletes: a summary.

    PubMed

    Eliakim, Alon; Cooper, Dan M; Nemet, Dan

    2014-11-01

    The present study compares previous reports on the effect of "real-life" typical field training in individual sports (i.e., cross-country running and wrestling, representing combat versus noncombat sports) and team sports (i.e., volleyball and water polo, representing water and land team sports) on GH and IGF-I, the main growth factors of the GH→IGF axis, in male and female late-pubertal athletes. Cross-country running practice and volleyball practice in both males and females were associated with significant increases in circulating GH levels, while none of the practices led to a significant increase in IGF-I levels. The magnitude (percent change) of the GH response to the different practices was determined mainly by preexercise GH levels. There was no difference in the training-associated GH response between individual and team sports practices. The GH response to the different typical practices was not influenced by the practice-associated lactate change. Further studies are needed to better understand the effect of real-life typical training in prepubertal and adolescent athletes and its role in exercise adaptations.

  1. 76 FR 52972 - United States v. Regal Beloit Corp. and A.O. Smith Corp.; Proposed Final Judgment and Competitive...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-24

    ... magnet technology, thereby allowing the motor to run more efficiently. 15. Motors sold for use in pool...-efficient motors because pool pumps typically run for many hours a day, sometimes even continuously. Pool... and fan blades are among the more difficult design aspects of furnace draft inducers. 51. Furnaces are...

  2. Data Driven Smart Proxy for CFD Application of Big Data Analytics & Machine Learning in Computational Fluid Dynamics, Report Two: Model Building at the Cell Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ansari, A.; Mohaghegh, S.; Shahnam, M.

    To ensure the usefulness of simulation technologies in practice, their credibility needs to be established with Uncertainty Quantification (UQ) methods. In this project, a smart proxy is introduced to significantly reduce the computational cost of conducting the large number of multiphase CFD simulations typically required for non-intrusive UQ analysis. Smart proxies for CFD models are developed using the pattern recognition capabilities of Artificial Intelligence (AI) and Data Mining (DM) technologies. Several CFD simulation runs with different inlet air velocities for a rectangular fluidized bed are used to create a smart CFD proxy that is capable of replicating the CFD results for the entire geometry and inlet velocity range. The smart CFD proxy is validated with blind CFD runs (CFD runs that have not played any role during the development of the smart CFD proxy). The developed and validated smart CFD proxy generates its results in seconds with reasonable error (less than 10%). Upon completion of this project, UQ studies that rely on hundreds or thousands of smart CFD proxy runs can be accomplished in minutes. The following figure demonstrates a validation example (blind CFD run) showing the results from the MFiX simulation and the smart CFD proxy for the pressure distribution across a fluidized bed at a given time-step (the layer number corresponds to the vertical location in the bed).
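    The train-on-a-few-runs, validate-on-a-blind-run workflow can be sketched in miniature. Here a cheap analytic function stands in for the CFD output at one cell (the real targets would come from MFiX runs, and the report's proxy uses AI/DM pattern recognition rather than this simple polynomial fit); the names and values are all assumptions for illustration:

```python
import numpy as np

# hypothetical stand-in for a CFD output: pressure at one cell as a
# smooth function of inlet air velocity (real data would come from MFiX)
def cfd_pressure(v):
    return 2.0 + 1.5 * v - 0.2 * v ** 2

train_v = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # training simulations
coef = np.polyfit(train_v, cfd_pressure(train_v), 2)   # the "smart proxy"

blind_v = 1.75                                         # blind run, unseen in training
pred = np.polyval(coef, blind_v)
err = abs(pred - cfd_pressure(blind_v)) / cfd_pressure(blind_v)
# the proxy reproduces the blind run within a small relative error
```

    The design point mirrors the report: once the proxy is fitted, each "run" is a function evaluation costing microseconds, so thousands of UQ samples become affordable.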

  3. High heat flux measurements and experimental calibrations/characterizations

    NASA Technical Reports Server (NTRS)

    Kidd, Carl T.

    1992-01-01

    Recent progress in techniques employed in the measurement of very high heat-transfer rates in reentry-type facilities at the Arnold Engineering Development Center (AEDC) is described. These advances include thermal analyses applied to transducer concepts used to make these measurements; improved heat-flux sensor fabrication methods, equipment, and procedures for determining the experimental time response of individual sensors; performance of absolute heat-flux calibrations at levels above 2,000 Btu/cu ft-sec (2.27 kW/cu cm); and innovative methods of performing in-situ run-to-run characterizations of heat-flux probes installed in the test facility. Graphical illustrations of the results of extensive thermal analyses of the null-point calorimeter and coaxial surface thermocouple concepts with application to measurements in aerothermal test environments are presented. Results of time response experiments and absolute calibrations of null-point calorimeters and coaxial thermocouples performed in the laboratory at intermediate to high heat-flux levels are shown. Typical AEDC high-enthalpy arc heater heat-flux data recently obtained with a Calspan-fabricated null-point probe model are included.

  4. Kaiser Permanente/Sandia National health care model. Phase I prototype final report. Part 1 - model overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.; Yoshimura, A.; Butler, D.

    1996-11-01

    This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology. This report is published as two documents: Model Overview and Domain Analysis. A separate Kaiser-proprietary report contains the Disease and Health Care Organization Selection Models.

  5. Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration

    NASA Astrophysics Data System (ADS)

    Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.

    2017-06-01

    Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. These simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (an SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (a CTH companion tool) that assumes a plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.

  6. Producing genome structure populations with the dynamic and automated PGS software.

    PubMed

    Hua, Nan; Tjong, Harianto; Shin, Hanjun; Gong, Ke; Zhou, Xianghong Jasmine; Alber, Frank

    2018-05-01

    Chromosome conformation capture technologies such as Hi-C are widely used to investigate the spatial organization of genomes. Because genome structures can vary considerably between individual cells of a population, interpreting ensemble-averaged Hi-C data can be challenging, in particular for long-range and interchromosomal interactions. We pioneered a probabilistic approach for the generation of a population of distinct diploid 3D genome structures consistent with all the chromatin-chromatin interaction probabilities from Hi-C experiments. Each structure in the population is a physical model of the genome in 3D. Analysis of these models yields new insights into the causes and the functional properties of the genome's organization in space and time. We provide a user-friendly software package, called PGS, which runs on local machines (for practice runs) and high-performance computing platforms. PGS takes a genome-wide Hi-C contact frequency matrix, along with information about genome segmentation, and produces an ensemble of 3D genome structures entirely consistent with the input. The software automatically generates an analysis report, and provides tools to extract and analyze the 3D coordinates of specific domains. Basic Linux command-line knowledge is sufficient for using this software. A typical running time of the pipeline is ∼3 d with 300 cores on a computer cluster to generate a population of 1,000 diploid genome structures at topological-associated domain (TAD)-level resolution.

  7. Do Running Kinematic Characteristics Change over a Typical HIIT for Endurance Runners?

    PubMed

    García-Pinillos, Felipe; Soto-Hermoso, Víctor M; Latorre-Román, Pedro Á

    2016-10-01

    García-Pinillos, F, Soto-Hermoso, VM, and Latorre-Román, PÁ. Do running kinematic characteristics change over a typical HIIT for endurance runners? J Strength Cond Res 30(10): 2907-2917, 2016. The purpose of this study was to describe the kinematic changes that occur during a common high-intensity intermittent training (HIIT) session for endurance runners. Twenty-eight male endurance runners participated in this study. A high-speed camera was used to measure sagittal-plane kinematics during the first and last runs of a HIIT session (4 × 3 × 400 m). The dependent variables were spatial-temporal variables, joint angles during support and swing, and foot strike pattern. Physiological variables, rate of perceived exertion, and athletic performance were also recorded. No significant changes (p ≥ 0.05) in kinematic variables were found over the HIIT session. Two cluster analyses were performed: one according to average running pace (faster vs. slower) and one according to the exhaustion level reached (exhausted group vs. nonexhausted group, NEG). At the first run, no significant differences were found between groups. As for the changes induced by the running protocol, significant differences (p ≤ 0.05) were found between faster and slower athletes in θhip and θknee at toe-off, whereas some changes were found in the NEG in θhip at toe-off (+4.3°) and in θknee during swing (-5.2°). The results show that a common HIIT session for endurance runners did not consistently or substantially perturb the running kinematics of trained male runners. Additionally, although some differences between groups were found, neither athletic performance nor the exhaustion level reached seems to be determinant in the kinematic response during a HIIT, at least for this group of moderately trained endurance runners.

  8. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2/NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f) and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determined optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000, and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU.
The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced allowing for accurate computation with modest computer hardware. NIHRO1 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
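    Once the MTF and NPS curves are in hand, assembling the frequency-dependent ratio the abstract names is a one-liner. The sketch below uses entirely made-up curves and an assumed fluence normalisation q, purely to show the shape of the computation (it is not the authors' GEANT4 pipeline):

```python
import numpy as np

# illustrative sketch of the ratio DQE(f) ∝ MTF(f)^2 / NPS(f); the frequency
# grid, MTF shape, NPS shape, and fluence q below are all assumed
f = np.linspace(0.0, 1.0, 64)              # spatial frequency (cycles/mm)
mtf = np.exp(-2.0 * f)                     # hypothetical slanted-line MTF
nps = 1e-4 * (0.3 + 0.7 * np.exp(-f))      # hypothetical flood-field NPS
q = 2.5e5                                  # assumed incident fluence per mm^2

dqe = mtf ** 2 / (q * nps)                 # proportional form from the abstract
```

    Because the MTF enters squared while the NPS enters once, noise in the simulated NPS dominates the uncertainty of DQE(f), which is why the paper's savings come from trimming nRun, nGamma, and nOpt in the NPS simulation specifically.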

  9. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.

    2013-04-01

    We present a simple to use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files.
    Program summary
    Catalogue identifier: AEOJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 124453
    No. of bytes in distributed program, including test data, etc.: 4728604
    Distribution format: tar.gz
    Programming language: C, CUDA, MATLAB
    Computer: PC, MAC
    Operating system: Windows, MacOS, Linux
    Has the code been vectorized or parallelized?: Yes. Number of processors used: single CPU; number of GPU cores dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690)
    Supplementary material: Setup guide, Installation guide
    RAM: Highly dependent on dimensionality and grid size. For a typical medium-large problem size in three dimensions, 4 GB is sufficient.
    Keywords: nonlinear Schrödinger equation, GPU, high-order finite difference, Bose-Einstein condensates
    Classification: 4.3, 7.7
    Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation.
    Solution method: The integrators utilize a fully explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB, including built-in visualization and analysis tools.
    Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU, as the code is currently only designed for running on a single GPU.
    Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators.
    Additional comments: Setup guide and Installation guide provided. The program has a dedicated web site at www.nlsemagic.com.
    Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
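    The solution method named in the summary (explicit RK4 in time, central differencing in space) can be sketched in a few lines. This is a toy 1D cubic-NLSE integrator in Python with second-order differences and an assumed periodic Gaussian setup, not NLSEmagic's CUDA/MATLAB code; a useful sanity check is that the conserved L2 norm ("mass") barely drifts:

```python
import numpy as np

# toy 1D cubic NLSE, i*u_t + u_xx - s*|u|^2*u = 0, so u_t = i*(u_xx - s|u|^2 u)
def rhs(u, dx, s):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2  # periodic Laplacian
    return 1j * (lap - s * np.abs(u) ** 2 * u)

def rk4_step(u, dt, dx, s):
    # classic fourth-order Runge-Kutta stage combination
    k1 = rhs(u, dx, s)
    k2 = rhs(u + 0.5 * dt * k1, dx, s)
    k3 = rhs(u + 0.5 * dt * k2, dx, s)
    k4 = rhs(u + dt * k3, dx, s)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = np.linspace(-10.0, 10.0, 256, endpoint=False)
dx = float(x[1] - x[0])
u = np.exp(-x ** 2).astype(complex)       # Gaussian initial condition
mass0 = np.sum(np.abs(u) ** 2) * dx       # conserved L2 norm ("mass")
for _ in range(100):
    u = rk4_step(u, 1.0e-3, dx, 1.0)
mass1 = np.sum(np.abs(u) ** 2) * dx       # should match mass0 very closely
```

    The GPU versions in the package do exactly this stage arithmetic per grid point, which is why the method parallelizes so well.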

  10. Rail-dbGaP: analyzing dbGaP-protected data in the cloud with Amazon Elastic MapReduce.

    PubMed

    Nellore, Abhinav; Wilks, Christopher; Hansen, Kasper D; Leek, Jeffrey T; Langmead, Ben

    2016-08-15

    Public archives contain thousands of trillions of bases of valuable sequencing data. More than 40% of the Sequence Read Archive is human data protected by provisions such as dbGaP. To analyse dbGaP-protected data, researchers must typically work with IT administrators and signing officials to ensure all levels of security are implemented at their institution. This is a major obstacle, impeding reproducibility and reducing the utility of archived data. We present a protocol and software tool for analyzing protected data in a commercial cloud. The protocol, Rail-dbGaP, is applicable to any tool running on Amazon Web Services Elastic MapReduce. The tool, Rail-RNA v0.2, is a spliced aligner for RNA-seq data, which we demonstrate by running on 9662 samples from the dbGaP-protected GTEx consortium dataset. The Rail-dbGaP protocol makes explicit for the first time the steps an investigator must take to develop Elastic MapReduce pipelines that analyse dbGaP-protected data in a manner compliant with NIH guidelines. Rail-RNA automates implementation of the protocol, making it easy for typical biomedical investigators to study protected RNA-seq data, regardless of their local IT resources or expertise. Rail-RNA is available from http://rail.bio. Technical details on the Rail-dbGaP protocol as well as an implementation walkthrough are available at https://github.com/nellore/rail-dbgap. Detailed instructions on running Rail-RNA on dbGaP-protected data using Amazon Web Services are available at http://docs.rail.bio/dbgap/. Contact: anellore@gmail.com or langmea@cs.jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  11. Forefoot running improves pain and disability associated with chronic exertional compartment syndrome.

    PubMed

    Diebal, Angela R; Gregory, Robert; Alitz, Curtis; Gerber, J Parry

    2012-05-01

    Anterior compartment pressures of the leg as well as kinematic and kinetic measures are significantly influenced by running technique. It is unknown whether adopting a forefoot strike technique will decrease the pain and disability associated with chronic exertional compartment syndrome (CECS) in hindfoot strike runners. For people who have CECS, adopting a forefoot strike running technique will lead to decreased pain and disability associated with this condition. Case series; Level of evidence, 4. Ten patients with CECS indicated for surgical release were prospectively enrolled. Resting and postrunning compartment pressures, kinematic and kinetic measurements, and self-report questionnaires were taken for all patients at baseline and after 6 weeks of a forefoot strike running intervention. Run distance and reported pain levels were recorded. A 15-point global rating of change (GROC) scale was used to measure perceived change after the intervention. After 6 weeks of forefoot run training, mean postrun anterior compartment pressures significantly decreased from 78.4 ± 32.0 mm Hg to 38.4 ± 11.5 mm Hg. Vertical ground-reaction force and impulse values were significantly reduced. Running distance significantly increased from 1.4 ± 0.6 km before intervention to 4.8 ± 0.5 km 6 weeks after intervention, while reported pain while running significantly decreased. The Single Assessment Numeric Evaluation (SANE) significantly increased from 49.9 ± 21.4 to 90.4 ± 10.3, and the Lower Leg Outcome Survey (LLOS) significantly increased from 67.3 ± 13.7 to 91.5 ± 8.5. The GROC scores at 6 weeks after intervention were between 5 and 7 for all patients. One year after the intervention, the SANE and LLOS scores were greater than reported during the 6-week follow-up. Two-mile run times were also significantly faster than preintervention values. No patient required surgery. In 10 consecutive patients with CECS, a 6-week forefoot strike running intervention led to decreased postrunning lower leg intracompartmental pressures. Pain and disability typically associated with CECS were greatly reduced for up to 1 year after intervention. Surgical intervention was avoided for all patients.

  12. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem, because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
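
The qualitative link between cadence and impact intensity follows from simple ballistic-flight arithmetic of the kind such bouncing-ball models build on (a back-of-envelope sketch only, not the authors' model; the constant grounded-phase ratio is an assumption made for illustration):

```python
G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(cadence_spm, grounded_ratio=0.6):
    """Vertical landing speed for symmetric ballistic flight between steps.

    cadence_spm: steps per minute; grounded_ratio: fraction of the step
    period spent on the ground (held constant here for illustration).
    """
    step_period = 60.0 / cadence_spm               # seconds per step
    t_flight = (1.0 - grounded_ratio) * step_period
    return G * t_flight / 2.0                      # v = g * (t_flight / 2)

# Higher cadence shortens the aerial phase, so landing speed drops:
v160 = impact_speed(160)
v180 = impact_speed(180)
```

With a fixed grounded-phase ratio, raising cadence from 160 to 180 steps per minute shortens each flight phase and therefore lowers the vertical landing speed, consistent with the model's claim that higher cadence reduces ground-foot impact intensity.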

  13. Seismic wave propagation modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, E.M.; Olsen, K.B.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). A hybrid, finite-difference technique was developed for modeling nonlinear soil amplification from three-dimensional, finite-fault radiation patterns for earthquakes in arbitrary earth models. The method was applied to the 17 January 1994 Northridge earthquake. Particle velocities were computed on a plane at 5-km depth, immediately above the causative fault. Time-series of the strike-perpendicular, lateral velocities then were propagated vertically in a soil column typical of the San Fernando Valley. Suitable material models were adapted from a suite used to model ground motions at the US Nevada Test Site. The effects of nonlinearity reduced relative spectral amplitudes by about 40% at frequencies above 1.5 Hz but only by 10% at lower frequencies. Runs made with source-depth amplitudes increased by a factor of two showed relative amplitudes reduced by a total of 70% above 1.5 Hz and by 20% at lower frequencies. Runs made with elastic-plastic material models showed similar behavior to runs made with Masing-Rule models.

  14. Variability of GPS-derived running performance during official matches in elite professional soccer players.

    PubMed

    Al Haddad, Hani; Méndez-Villanueva, Alberto; Torreño, Nacho; Munguía-Izquierdo, Diego; Suárez-Arrones, Luis

    2017-09-22

    The aim of this study was to assess the match-to-match variability obtained using GPS devices, collected during official games in professional soccer players. GPS-derived data from nineteen elite soccer players were collected over two consecutive seasons. Time-motion data for players with more than five full matches were analyzed (n=202). Total distance covered (TD), TD >13-18 km/h, TD >18-21 km/h, TD >21 km/h, and numbers of accelerations >2.5-4 m.s-2 and >4 m.s-2 were calculated. The match-to-match variation in running activity was assessed by the typical error expressed as a coefficient of variation (CV,%) and the magnitude of the CV was calculated (effect size). When all players were pooled together, CVs ranged from 5% to 77% (first half) and from 5% to 90% (second half) for TD and number of accelerations >4 m.s-2, respectively, and the magnitude of the CVs was rated from small to moderate (effect size = 0.57-0.98). The CVs were likely to increase with running/acceleration intensity, and were likely to differ between playing positions (e.g., TD >13-18 km/h: 3.4% for second strikers vs 14.2% for strikers, and 14.9% for wide-defenders vs 9.7% for wide-midfielders). Present findings indicate that variability in players' running performance is high in some variables and likely position-dependent. Such variability should be taken into account when using these variables to prescribe and/or monitor training intensity/load. GPS-derived match-to-match variability in official games' locomotor performance of professional soccer players is high in some variables, particularly for high-speed running, due to the complexity of match running performance and its most influential factors and the reliability of the devices.
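
The "typical error expressed as a CV" can be computed from consecutive-match pairs in the standard way (a minimal sketch of one common convention, Hopkins' typical error; the distance values below are made up for illustration):

```python
import math

def typical_error_cv(match1, match2):
    """Typical error as a coefficient of variation (%) from paired matches.

    The typical error is the SD of the match-to-match differences divided
    by sqrt(2); the CV expresses it relative to the grand mean.
    """
    diffs = [b - a for a, b in zip(match1, match2)]
    mean_diff = sum(diffs) / len(diffs)
    sd_diff = math.sqrt(
        sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1)
    )
    te = sd_diff / math.sqrt(2)
    grand_mean = (sum(match1) + sum(match2)) / (len(match1) + len(match2))
    return 100.0 * te / grand_mean

# Total distance (m) for three players in two consecutive matches (invented):
cv = typical_error_cv([10000, 10500, 9800], [10200, 10300, 10100])
```

The division by sqrt(2) reflects that each difference contains the error of two observations, so the SD of differences overstates the error of a single match.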

  15. Improving overlay manufacturing metrics through application of feedforward mask-bias

    NASA Astrophysics Data System (ADS)

    Joubert, Etienne; Pellegrini, Joseph C.; Misra, Manish; Sturtevant, John L.; Bernhard, John M.; Ong, Phu; Crawshaw, Nathan K.; Puchalski, Vern

    2003-06-01

    Traditional run-to-run controllers that rely on highly correlated historical events to forecast process corrections have been shown to provide substantial benefit over manual control in the case of a fab that is primarily manufacturing high-volume, frequently running parts (i.e., DRAM, MPU, and similar operations). However, a limitation of the traditional controller emerges when it is applied to a fab whose work in process (WIP) is composed primarily of short-running, high part count products (typical of foundries and ASIC fabs). This limitation exists because there is a strong likelihood that each reticle has a unique set of process corrections different from other reticles at the same process layer. Further limitations exist when it is realized that each reticle is loaded and aligned differently on multiple exposure tools. A structural change in how the run-to-run controller manages the frequent reticle changes associated with the high part count environment has allowed breakthrough performance to be achieved. This breakthrough was made possible by two realizations: 1. Reticle-sourced errors were highly stable over long periods of time, thus allowing them to be deconvolved from the day-to-day tool and process drifts. 2. Reticle-sourced errors can be modeled as a feedforward disturbance rather than as discriminators in defining and dividing process streams. In this paper, we show how to deconvolve the static (reticle) and dynamic (day-to-day tool and process) components from the overall error vector to better forecast feedback for existing products, as well as how to compute or learn these values for new product introductions or new tool startups. Manufacturing data will be presented to support this discussion with some real-world success stories.
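
The deconvolution idea, treating per-reticle errors as a static feedforward term separate from the shared day-to-day drift, can be sketched as follows (an illustrative toy, not the authors' controller; the reticle IDs, offsets, and error values are invented):

```python
# Observations: (reticle_id, measured_overlay_error). The error is modeled
# as a static reticle offset plus a shared dynamic tool/process drift.
observations = [
    ("A", 7.0), ("B", -1.0), ("A", 7.2),
    ("B", -0.8), ("A", 6.8), ("B", -1.2),
]

# Static (reticle) component: each reticle's mean error relative to the
# grand mean. Only relative offsets are identifiable; the common part is
# absorbed into the drift estimate.
grand_mean = sum(e for _, e in observations) / len(observations)
by_reticle = {}
for rid, e in observations:
    by_reticle.setdefault(rid, []).append(e)
offsets = {rid: sum(es) / len(es) - grand_mean
           for rid, es in by_reticle.items()}

# Dynamic component: drift estimated from offset-corrected errors (a plain
# mean here; a production controller would use e.g. an EWMA filter).
drift = sum(e - offsets[rid] for rid, e in observations) / len(observations)

# Feedforward correction for the next run of reticle "A":
correction = -(offsets["A"] + drift)
```

Because the reticle term is static, it can be learned once and fed forward, so each new lot no longer needs a long per-reticle history before the controller converges.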

  16. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating the long time periods needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (by a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
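
The CSTR building block of such conceptual models reduces to a single mass-balance ODE per reservoir, which is what makes them so much cheaper than a full hydrodynamic grid. A minimal explicit-Euler sketch (illustrative only; the parameter values and the first-order decay term are assumptions, not taken from the paper):

```python
def simulate_cstr(c_in, volume, flow, decay, dt, steps, c0=0.0):
    """Explicit-Euler integration of a CSTR mass balance:

        dC/dt = (Q/V) * (C_in - C) - k * C

    c_in: inflow concentration, volume: reservoir volume (m^3),
    flow: discharge Q (m^3/s), decay: first-order rate k (1/s).
    """
    c = c0
    for _ in range(steps):
        c += dt * ((flow / volume) * (c_in - c) - decay * c)
    return c

# With residence time V/Q = 1e4 s and k = 1e-4 1/s, the analytical steady
# state is C_in / (1 + k * V / Q) = 5.0 / (1 + 1) = 2.5.
c_ss = simulate_cstr(c_in=5.0, volume=2.0e4, flow=2.0, decay=1e-4,
                     dt=10.0, steps=200000)
```

A river branch is then a chain of such reservoirs, the outflow concentration of one feeding the inflow of the next, with residence times fitted to the detailed hydrodynamic results.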

  17. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Especially for long simulations, energy conservation is critical due to the phenomenon known as “energy drift”, in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift using 1 fs for a time step while constraining the distances of all bonds is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
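
The lookup-table trade-off described above (quadratic interpolation on a coarser grid versus linear interpolation on a finer one) can be demonstrated on any smooth kernel; this toy uses exp(-r) as a stand-in, and the function, grid sizes, and test points are illustrative assumptions, not MOIL's actual tables:

```python
import math

def make_table(f, r_max, n):
    """Tabulate f on n equally spaced nodes over [0, r_max]."""
    h = r_max / (n - 1)
    return [f(i * h) for i in range(n)], h

def linear_lookup(table, h, r):
    """Two-point linear interpolation between neighbouring nodes."""
    i = min(int(r / h), len(table) - 2)
    t = r / h - i
    return (1 - t) * table[i] + t * table[i + 1]

def quadratic_lookup(table, h, r):
    """Three-point Lagrange (parabolic) interpolation about the nearest node."""
    i = min(max(int(round(r / h)), 1), len(table) - 2)
    t = r / h - i  # offset in roughly [-0.5, 0.5] around the central node
    y0, y1, y2 = table[i - 1], table[i], table[i + 1]
    return y1 + 0.5 * t * (y2 - y0) + 0.5 * t * t * (y2 - 2 * y1 + y0)

f = lambda r: math.exp(-r)
lin_tab, lin_h = make_table(f, 10.0, 4001)    # fine grid, linear lookup
quad_tab, quad_h = make_table(f, 10.0, 1001)  # 4x coarser grid, quadratic

pts = [0.013, 1.7, 3.33, 7.777]
err_lin = max(abs(linear_lookup(lin_tab, lin_h, r) - f(r)) for r in pts)
err_quad = max(abs(quadratic_lookup(quad_tab, quad_h, r) - f(r)) for r in pts)
```

Linear interpolation error scales as h^2 while the parabolic fit scales as h^3, so the quadratic table can afford a coarser grid (hence a smaller, more cache-friendly table) at comparable or better accuracy, matching the trade-off the abstract reports.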

  18. Motor Learning: An Analysis of 100 Trials of a Ski Slalom Game in Children with and without Developmental Coordination Disorder

    PubMed Central

    Smits-Engelsman, Bouwien C. M.; Jelsma, Lemke Dorothee; Ferguson, Gillian D.; Geuze, Reint H.

    2015-01-01

    Objective Although Developmental Coordination Disorder (DCD) is often characterized as a skill acquisition deficit disorder, few studies have addressed the process of motor learning. This study examined learning of a novel motor task; the Wii Fit ski slalom game. The main objectives were to determine: 1) whether learning occurs over 100 trial runs of the game, 2) if the learning curve is different between children with and without DCD, 3) if learning is different in an easier or harder version of the task, 4) if learning transfers to other balance tasks. Method 17 children with DCD (6–10 years) and a matched control group of 17 typically developing (TD) children engaged in 20 minutes of gaming, twice a week for five weeks. Each training session comprised of alternating trial runs, with five runs at an easy level and five runs at a difficult level. Wii scores, which combine speed and accuracy per run, were recorded. Standardized balance tasks were used to measure transfer. Results Significant differences in initial performance were found between groups on the Wii score and balance tasks. Both groups improved their Wii score over the five weeks. Improvement in the easy and in the hard task did not differ between groups. Retention in the time between training sessions was not different between TD and DCD groups either. The DCD group improved significantly on all balance tasks. Conclusions The findings in this study give a fairly coherent picture of the learning process over a medium time scale (5 weeks) in children novice to active computer games; they learn, retain and there is evidence of transfer to other balance tasks. The rate of motor learning is similar for those with and without DCD. Our results raise a number of questions about motor learning that need to be addressed in future research. PMID:26466324

  19. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778

  20. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI.

    PubMed

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R; Anagnostopoulos, Christoforos; Faisal, Aldo A; Montana, Giovanni; Leech, Robert

    2016-04-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Joint Eglin Acoustics Week 2013 Data Report

    DTIC Science & Technology

    2017-10-01

    during this test. The M-model HH-60 (Tail Number 04-27001), with the new wide-chord blade that is principally characterized by its unique tapered...cards located within each remote unit. Upon termination of each run, sufficient data metrics and system health information are transmitted back to the...command computer to assure that good data were acquired at each microphone station during the run. A typical WAMS microphone station deployment is

  2. Deformable mirror technologies at AOA Xinetics

    NASA Astrophysics Data System (ADS)

    Wirth, Allan; Cavaco, Jeffrey; Bruno, Theresa; Ezzo, Kevin M.

    2013-05-01

    AOA Xinetics (AOX) has been at the forefront of Deformable Mirror (DM) technology development for over two decades. In this paper the current state of that technology is reviewed and the particular strengths and weaknesses of the various DM architectures are presented. Emphasis is placed on the requirements for DMs applied to the correction of high-energy and high average power lasers. Mirror designs optimized for the correction of typical thermal lensing effects in diode pumped solid-state lasers will be detailed and their capabilities summarized. Passive thermal management techniques that allow long laser run times to be supported will also be discussed.

  3. Space commercialization: Analysis of R and D investments with long time horizons

    NASA Technical Reports Server (NTRS)

    Sheahen, T. P.

    1984-01-01

    By following a single hypothetical example through a series of variations, the way different potential investors might look at the opportunity to participate in space commercialization is described. The example itself is fairly typical of commercial opportunities in space. The chief characteristics are a steadily increasing requirement for capital infusion over an 8 year period, followed by a very generous stream of profits running another decade or more beyond. There is a decision point at 3 years, at the conclusion of laboratory R&D; and another at 6 years, following 2 initial space flights.

  4. Parallelization and visual analysis of multidimensional fields: Application to ozone production, destruction, and transport in three dimensions

    NASA Technical Reports Server (NTRS)

    Schwan, Karsten

    1994-01-01

    Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.

  5. Prescribed and self-reported seasonal training of distance runners.

    PubMed

    Hewson, D J; Hopkins, W G

    1995-12-01

    A survey of 123 distance-running coaches and their best runners was undertaken to describe prescribed seasonal training and its relationship to the performance and self-reported training of the runners. The runners were 43 females and 80 males, aged 24 +/- 8 years (mean +/- S.D.), training for events from 800 m to the marathon, with seasonal best paces of 86 +/- 6% of sex- and age-group world records. The coaches and runners completed a questionnaire on typical weekly volumes of interval and strength training, and typical weekly volumes and paces of moderate and hard continuous running, for build-up, pre-competition, competition and post-competition phases of a season. Prescribed training decreased in volume and increased in intensity from the build-up through to the competition phase, and had similarities with 'long slow distance' training. Coaches of the faster runners prescribed longer build-ups, greater volumes of moderate continuous running and slower relative paces of continuous running (r = 0.19-0.36, P < 0.05), suggesting beneficial effects of not training close to competition pace. The mean training volumes and paces prescribed by the coaches were similar to those reported by the runners, but the correlations between prescribed and reported training were poor (r = 0.2-0.6). Coaches may therefore need to monitor their runners' training more closely.

  6. Predictable turn-around time for post tape-out flow

    NASA Astrophysics Data System (ADS)

    Endo, Toshikazu; Park, Minyoung; Ghosh, Pradiptya

    2012-03-01

    A typical post-tape-out flow data path at an IC fabrication facility has the following major components of software-based processing: Boolean operations before the application of resolution enhancement techniques (RET) and optical proximity correction (OPC); the RET and OPC step [etch retargeting, sub-resolution assist feature (SRAF) insertion, and OPC]; post-OPC/RET Boolean operations; and sometimes, in the same flow, simulation-based verification. There are two objectives that an IC fabrication tapeout flow manager wants to achieve with the flow: predictable completion time and fastest turn-around time (TAT). At times these may be competing. There have been studies in the literature modeling the turnaround time from historical data for runs with the same recipe and later using that to derive the resource allocation for subsequent runs [3]. This approach is more feasible for predominantly simulation-dominated tools, but for an edge-operation-dominated flow it may not be possible, especially if processing acceleration methods like pattern matching or hierarchical processing are involved. In this paper, we suggest an alternative method of providing a target turnaround time and managing the priority of jobs without doing any upfront resource modeling and resource planning. The methodology then systematically either meets the turnaround time requirement or lets the user know as soon as possible that it will not. This builds on top of the Calibre Cluster Management (CalCM) resource management work previously published [1][2]. The paper describes the initial demonstration of the concept.

  7. CHEETAH: A next generation thermochemical code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.; Souers, P.

    1994-11-01

    CHEETAH is an effort to bring the TIGER thermochemical code into the 1990s. A wide variety of improvements have been made in Version 1.0. We have improved the robustness and ease of use of TIGER. All of TIGER's solvers have been replaced by new algorithms. We find that CHEETAH solves a wider variety of problems with no user intervention (e.g. no guesses for the C-J state) than TIGER did. CHEETAH has been made simpler to use than TIGER; typical use of the code occurs with the new standard run command. CHEETAH will make the use of thermochemical codes more attractive to practical explosive formulators. We have also made an extensive effort to improve over the results of TIGER. CHEETAH's version of the BKW equation of state (BKWC) is able to accurately reproduce energies from cylinder tests; something that other BKW parameter sets have been unable to do. Calculations performed with BKWC execute very quickly; typical run times are under 10 seconds on a workstation. In the future we plan to improve the underlying science in CHEETAH. More accurate equations of state will be used in the gas and the condensed phase. A kinetics capability will be added to the code that will predict reaction zone thickness. Further ease-of-use features will eventually be added; an automatic formulator that adjusts concentrations to match desired properties is planned.

  8. Short-term Memory in Childhood Dyslexia: Deficient Serial Order in Multiple Modalities.

    PubMed

    Cowan, Nelson; Hogan, Tiffany P; Alt, Mary; Green, Samuel; Cabbage, Kathryn L; Brinkley, Shara; Gray, Shelley

    2017-08-01

    In children with dyslexia, deficits in working memory have not been well-specified. We assessed second-grade children with dyslexia, with and without concomitant specific language impairment, and children with typical development. Immediate serial recall of lists of phonological (non-word), lexical (digit), spatial (location) and visual (shape) items were included. For the latter three modalities, we used not only standard span but also running span tasks, in which the list length was unpredictable to limit mnemonic strategies. Non-word repetition tests indicated a phonological memory deficit in children with dyslexia alone compared with those with typical development, but this difference vanished when these groups were matched for non-verbal intelligence and language. Theoretically important deficits in serial order memory in dyslexic children, however, persisted relative to matched typically developing children. The deficits were in recall of (1) spoken digits in both standard and running span tasks and (2) spatial locations, in running span only. Children with dyslexia with versus without language impairment, when matched on non-verbal intelligence, had comparable serial order memory, but differed in phonology. Because serial orderings of verbal and spatial elements occur in reading, the careful examination of order memory may allow a deeper understanding of dyslexia and its relation to language impairment. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Documentation of a restart option for the U.S. Geological Survey coupled Groundwater and Surface-Water Flow (GSFLOW) model

    USGS Publications Warehouse

    Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.

    2015-10-02

    The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.

  10. Online evolution reconstruction from a single measurement record with random time intervals for quantum communication

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong

    2017-10-01

    Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed on the basis of recovering its expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed of the discarded qubits of the six-state protocol. The simulated results show that our method is robust to typical metro quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one, which can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with the quantum communication protocol, which facilitates online quantum tomography.

  11. Fast algorithm for spectral processing with application to on-line welding quality assurance

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
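
While the LPO operator itself is specific to the paper, the general idea of sub-pixel peak localization can be illustrated with the common log-parabolic three-point fit, which is exact for a Gaussian line shape (an illustrative stand-in, not the LPO algorithm; the synthetic emission line is an assumption):

```python
import math

def subpixel_peak(y):
    """Sub-pixel peak position via a parabola fitted through the log of
    the maximum sample and its two neighbours (exact for a Gaussian)."""
    i = max(range(1, len(y) - 1), key=lambda k: y[k])
    l0, l1, l2 = math.log(y[i - 1]), math.log(y[i]), math.log(y[i + 1])
    return i + 0.5 * (l0 - l2) / (l0 - 2 * l1 + l2)

# Synthetic emission line: Gaussian centred between pixels at 10.3.
y = [math.exp(-((k - 10.3) ** 2) / (2 * 2.0 ** 2)) for k in range(21)]
centre = subpixel_peak(y)
```

Locating line centres to a fraction of a pixel is what makes the subsequent electronic-temperature estimate stable even on a low-cost, coarsely sampled CCD spectrometer.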

  12. Robust algorithm for aligning two-dimensional chromatograms.

    PubMed

    Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel

    2012-11-06

    Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
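    The core idea of alignment points can be sketched in one dimension: user-matched retention times in the target and reference chromatograms define a piecewise-linear warp that is interpolated for every other peak. All retention times below are hypothetical, and the real algorithm warps both GC × GC dimensions:

```python
import numpy as np

# Hypothetical alignment points picked by the user: retention times (s)
# of the same compounds in the target and reference chromatograms.
target_rt    = np.array([100.0, 250.0, 400.0, 600.0])
reference_rt = np.array([ 98.0, 255.0, 396.0, 605.0])

def align(rt):
    """Map a target retention time onto the reference axis by
    piecewise-linear interpolation between alignment points
    (a 1-D sketch; the published algorithm works in 2-D)."""
    return np.interp(rt, target_rt, reference_rt)

print(align(100.0))   # an alignment point maps exactly: 98.0
print(align(175.0))   # a peak between two points is interpolated
```

    Because the warp only resamples positions, the quantitative signal itself is left untouched between alignment points, consistent with the paper's goal of minimal modification of signal information.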

  13. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  14. All-sky search for gravitational-wave bursts in the second joint LIGO-Virgo run

    NASA Astrophysics Data System (ADS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Arain, M. A.; Araya, M. C.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Belletoile, A.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannizzo, J.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chaibi, O.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chelkowski, S.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J.; Clayton, J. 
H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Del Pozzo, W.; del Prete, M.; Dent, T.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Flanigan, M.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gemme, G.; Geng, R.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Gray, N.; Greenhalgh, R. J. S.; Gretarsson, A. 
M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Ha, T.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Hardt, A.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kranz, O.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, R.; Kwee, P.; Lam, P. K.; Landry, M.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Liguori, N.; Lindquist, P. E.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Luan, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marandi, A.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McKechan, D. J. A.; McWilliams, S.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Peiris, P.; Pekowsky, L.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Redwine, K.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sainathan, P.; Salemi, F.; Sammut, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Soto, J.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Stein, A. J.; Stein, L. C.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tseng, K.; Tucker, E.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. 
J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wang, Z.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, L.; Williams, R.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yu, P.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhang, W.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2012-06-01

    We present results from a search for gravitational-wave bursts in the data collected by the LIGO and Virgo detectors between July 7, 2009 and October 20, 2010: data are analyzed when at least two of the three LIGO-Virgo detectors are in coincident operation, with a total observation time of 207 days. The analysis searches for transients of duration ≲1 s over the frequency band 64-5000 Hz, without other assumptions on the signal waveform, polarization, direction, or occurrence time. All identified events are consistent with the expected accidental background. We set frequentist upper limits on the rate of gravitational-wave bursts by combining this search with the previous LIGO-Virgo search on the data collected between November 2005 and October 2007. The upper limit on the rate of strong gravitational-wave bursts at the Earth is 1.3 events per year at 90% confidence. We also present upper limits on source rate density per year and per Mpc³ for sample populations of standard-candle sources. As in the previous joint run, typical sensitivities of the search in terms of the root-sum-squared strain amplitude for these waveforms lie in the range ~5×10⁻²² Hz⁻¹/² to ~1×10⁻²⁰ Hz⁻¹/². The combination of the two joint runs constitutes the most sensitive all-sky search for generic gravitational-wave bursts to date and synthesizes the results achieved by the initial generation of interferometric detectors.
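    The quoted sensitivities are expressed as the root-sum-squared (rss) strain amplitude, which for the two gravitational-wave polarizations is conventionally defined as

```latex
h_{\mathrm{rss}} = \sqrt{\int \left( \left|h_{+}(t)\right|^{2} + \left|h_{\times}(t)\right|^{2} \right)\,\mathrm{d}t}
```

    Since strain is dimensionless, integrating its square over time yields units of s¹/² = Hz⁻¹/², matching the range quoted above.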

  15. Body-terrain interaction affects large bump traversal of insects and legged robots.

    PubMed

    Gart, Sean W; Li, Chen

    2018-02-02

    Small animals and robots must often rapidly traverse large bump-like obstacles when moving through complex 3D terrains, during which, in addition to leg-ground contact, their body inevitably comes into physical contact with the obstacles. However, we know little about the performance limits of large bump traversal and how body-terrain interaction affects it. To address this, we challenged the discoid cockroach and an open-loop six-legged robot to run dynamically into a large bump of varying height to discover the maximal traversal performance, and studied how locomotor modes and traversal performance are affected by body-terrain interaction. Remarkably, during rapid running, both the animal and the robot were capable of dynamically traversing a bump much higher than hip height (up to 4 times hip height for the animal and 3 times for the robot) at traversal speeds typical of running, although traversal probability decreased with increasing bump height. A stability analysis using a novel locomotion energy landscape model explained why traversal was more likely when the animal or robot approached the bump with a low initial body yaw and a high initial body pitch, and why deflection was more likely otherwise. Inspired by these principles, we demonstrated a novel control strategy of active body pitching that increased the robot's maximal traversable bump height by 75%. Our study is a major step in establishing the framework of locomotion energy landscapes to understand locomotion in complex 3D terrains.

  16. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges of speeding it up: studying the historical trends of EnergyPlus run time in light of advances in computer hardware and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and profiling the code to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. Software code and architecture changes to improve EnergyPlus run time, based on the profiling results, are also discussed.

  17. Parametric optimisation of heat treated recycling aluminium (AA6061) by response surface methodology

    NASA Astrophysics Data System (ADS)

    Ahmad, A.; Lajis, M. A.; Yusuf, N. K.; Shamsudin, S.; Zhong, Z. W.

    2017-09-01

    Replacing typical primary aluminium production with a recycling route benefits various parties, including the environment, since the high cost and massive energy consumption of primary production are avoided. At present, hot extrusion is preferred as an effective solid-state recycling process over the typical method of melting the swarf at high temperature. However, the ideal properties of the extruded product can only be achieved through a controlled process used to alter the microstructure and impart properties that benefit the working life of a component, known as heat treatment. To that end, this work investigates the effect of extrusion temperature and ageing time on the hardness of recycled aluminium chips. Employing Analysis of Variance (ANOVA) for a full factorial design with centre point, a total of 11 runs were carried out in random order. Three dissimilar extrusion temperatures were used to obtain gear-shaped billets. Extruded billets were cut and ground before entering the treatment phase at three different ageing times. Ageing time, rather than extrusion temperature, was found to be the influential factor affecting material hardness. Sufficient ageing time allows impurity atoms to interfere with dislocation motion and yields high hardness, while the extrusion temperature still assists bonding via interparticle diffusion transport.
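    The run count is consistent with a 3 × 3 full factorial over the two factors plus two replicates of the centre point (9 + 2 = 11). A minimal sketch of generating such a design; the factor levels and the number of centre replicates are assumptions, since the abstract does not list them:

```python
from itertools import product

# Hypothetical factor levels (three extrusion temperatures and
# three ageing times, per the abstract; exact values invented here)
temperatures = [450, 500, 550]   # degC
ageing_times = [2, 6, 10]        # hours

# Full 3 x 3 factorial: every temperature paired with every ageing time
runs = list(product(temperatures, ageing_times))

# ...plus two assumed replicates of the centre point -> 11 runs total
runs += [(500, 6)] * 2

print(len(runs))  # 11
```

    In a real experiment the run order would then be randomized, as the abstract notes, so that drift in the furnace or press does not confound the factor effects.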

  18. The Amateur Scientist.

    ERIC Educational Resources Information Center

    Walker, Jearl

    1983-01-01

    Water striders are insects that walk and run on the surface of water. Discusses the morphology, physiology, and behavior of these insects. Includes diagrams of stages in the movement of a typical strider. (JN)

  19. Static and Dynamic Frequency Scaling on Multicore CPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Hong, Changwan; Chunduri, Sudheer

    2016-12-28

    Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying a processor's operating frequency (and the associated voltage). Typical approaches employing DVFS use default strategies such as running at the lowest or the highest frequency, or observing the CPU's runtime behavior and dynamically adapting the voltage/frequency configuration based on CPU usage. In this paper, we argue that many previous approaches suffer from inherent limitations, such as not accounting for the processor-specific impact of frequency changes on energy for different workload types. We first propose a lightweight runtime approach that automatically adapts the frequency based on the CPU workload and is agnostic of the processor characteristics. We then show that further improvements can be achieved for affine kernels in the application by using a compile-time characterization instead of run-time monitoring to select the frequency and the number of CPU cores to use. Our framework relies on a one-time energy characterization of CPU-specific DVFS profiles followed by a compile-time categorization of loop-based code segments in the application. These are combined to determine a priori the frequency and number of cores with which to execute the application so as to optimize energy or energy-delay product, outperforming the runtime approach. Extensive evaluation on 60 benchmarks and five multi-core CPUs shows that our approach systematically outperforms the powersave Linux governor while improving overall performance.
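    The a priori selection step can be sketched as a lookup over the one-time characterization table: for each frequency/core configuration, the measured energy and runtime of a kernel category are combined into an energy-delay product (EDP), and the minimizing configuration is chosen before the run. All numbers below are hypothetical, not from the paper:

```python
# Hypothetical one-time characterization of one kernel category:
# (freq_GHz, cores) -> (energy_J, time_s) measured offline.
profiles = {
    (1.2, 4): (40.0, 2.5),
    (2.0, 4): (55.0, 1.6),
    (2.0, 8): (70.0, 1.0),
    (3.0, 8): (95.0, 0.8),
}

def best_config(profiles):
    """Pick the configuration minimizing energy-delay product
    (energy * time); minimizing energy alone would instead pick
    the key with the smallest first tuple element."""
    return min(profiles, key=lambda k: profiles[k][0] * profiles[k][1])

print(best_config(profiles))  # the EDP-optimal (frequency, cores) pair
```

    Note how the EDP objective differs from pure energy minimization: the lowest-frequency entry uses the least energy here but is so slow that its EDP is the worst of the table.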

  20. Continued Development of a Global Heat Transfer Measurement System at AEDC Hypervelocity Wind Tunnel 9

    NASA Technical Reports Server (NTRS)

    Kurits, Inna; Lewis, M. J.; Hamner, M. P.; Norris, Joseph D.

    2007-01-01

    Heat transfer rates are an extremely important consideration in the design of hypersonic vehicles such as atmospheric reentry vehicles. This paper describes the development of a data reduction methodology to evaluate global heat transfer rates using surface temperature-time histories measured with the temperature-sensitive paint (TSP) system at AEDC Hypervelocity Wind Tunnel 9. As part of this development effort, a scale model of the NASA Crew Exploration Vehicle (CEV) was painted with TSP, and multiple sequences of high-resolution images were acquired during a five-run test program. Heat transfer calculation from TSP data in Tunnel 9 is challenging because of the relatively long run times, the high Reynolds number environment, and the desire to utilize the typical stainless steel wind tunnel models used for force and moment testing. An approach to reduce TSP data into convective heat flux was developed with these conditions in mind. Surface temperatures from high-quality quantitative global temperature maps acquired with the TSP system were then used as input to the algorithm. A preliminary comparison of the heat flux calculated from the TSP surface temperature data with the value calculated from standard thermocouple data is reported.

  1. Simulating an Exploding Fission-Bomb Core

    NASA Astrophysics Data System (ADS)

    Reed, Cameron

    2016-03-01

    A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of ``initiator'' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
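    The simulation logic described above can be sketched as a toy generation-by-generation Monte Carlo: each neutron is born inside the sphere, flies an exponentially distributed free path in a random direction, and then either escapes, fissions, is captured, or scatters. All cross-section-like probabilities below are invented for illustration, but the sketch reproduces the qualitative result the abstract describes: below a critical size the neutron population dies out, above it the population grows:

```python
import math
import random

def run_core(radius, mean_free_path=2.0, p_fission=0.45, p_capture=0.10,
             neutrons=200, generations=8, seed=1):
    """Toy Monte Carlo of single-flight neutron generations in a bare
    spherical core (no tamper). All parameters are illustrative."""
    rng = random.Random(seed)
    n = neutrons
    for _ in range(generations):
        nxt = 0
        for _ in range(n):
            # Birth point: uniform inside the sphere (rejection sampling)
            while True:
                x, y, z = (rng.uniform(-radius, radius) for _ in range(3))
                if x * x + y * y + z * z <= radius * radius:
                    break
            # Isotropic flight direction over an exponential free path
            d = rng.expovariate(1.0 / mean_free_path)
            cos_t = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            x += d * sin_t * math.cos(phi)
            y += d * sin_t * math.sin(phi)
            z += d * cos_t
            if x * x + y * y + z * z > radius * radius:
                continue                     # escaped through the surface
            u = rng.random()
            if u < p_fission:                # fission: ~2.5 neutrons on average
                nxt += 3 if rng.random() < 0.5 else 2
            elif u < p_fission + p_capture:  # capture: neutron absorbed
                pass
            else:                            # scatter: survives this generation
                nxt += 1
        n = min(nxt, 20000)                  # cap to keep the toy run bounded
        if n == 0:
            break
    return n

print(run_core(2.0), run_core(15.0))  # small core dies out, large core multiplies
```

    Real codes track continuous time, energy-dependent cross sections, and tamper reflection, but even this sketch makes the existence of a critical radius for a fixed parameter set visible.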

  2. Dynamically Alterable Arrays of Polymorphic Data Types

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    An application library package was developed that represents Deep Space Network (DSN) message packets as dynamically alterable arrays composed of arbitrary polymorphic data types. The software addresses a limitation of the present state of the practice, in which an array is directly composed of a single monomorphic data type. This is a severe limitation when dealing with science data, because the types of the objects involved are typically not known in advance and are therefore dynamic in nature. The unique feature of this approach is that it enables one to define the dynamic shape of the matrix at run time, with the ability to store a polymorphic data type at each of its indices. Existing languages such as C and C++ impose the restriction that the shape of the array must be known in advance and that each of its elements be a monomorphic data type strictly defined at compile time. This program can be executed on a variety of platforms and can be distributed in either source code or binary form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.
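    The behaviour described, run-time shape with per-cell polymorphism, is native to dynamically typed languages, which makes for a compact illustration of what the package adds on top of C/C++-style monomorphic arrays. This is an illustration only, not the DSN library's API:

```python
# In C, an array's shape and element type are fixed at compile time.
# A dynamically typed sketch of the behaviour described in the record:
# the matrix shape is chosen at run time and each cell may hold a
# value of a different type.

def make_packet(rows, cols, fill=None):
    """Allocate a rows x cols matrix whose shape is decided at run time."""
    return [[fill for _ in range(cols)] for _ in range(rows)]

packet = make_packet(2, 3)          # shape chosen from run-time values
packet[0][0] = 42                   # integer
packet[0][1] = "frame-sync"         # string
packet[0][2] = [1.5, 2.5]           # list of floats

print([type(v).__name__ for v in packet[0]])
```

    The record's package provides this flexibility to programs hosted on a Lisp runtime, where heterogeneous containers are likewise first-class.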

  3. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI running on Linux systems, as well as a reference manual, are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Lightweight fuzzy processes in clinical computing.

    PubMed

    Hurdle, J F

    1997-09-01

    In spite of advances in computing hardware, many hospitals still have a hard time finding extra capacity in their production clinical information systems to run artificial intelligence (AI) modules, for example to support real-time drug-drug or drug-lab interaction checking, to track infection trends, to monitor compliance with case-specific clinical guidelines, or to monitor/control biomedical devices such as an intelligent ventilator. Historically, AI functionality was not a major design concern when a typical clinical system was originally specified; AI technology is usually retrofitted 'on top of the old system' or 'run off-line' in tandem with it to ensure that the routine workload still gets done, with as little impact from the AI side as possible. To compound the burden on system performance, most institutions have witnessed a long and increasing trend of intramural and extramural reporting (e.g. the collection of data for a quality-control report in microbiology, or a meta-analysis of a suite of coronary artery bypass graft techniques), and these place an ever-growing burden on a typical computer system's performance. We discuss a promising approach to adding extra AI processing power to a heavily used system, based on the notion of 'lightweight fuzzy processing' (LFP): fuzzy modules designed from the outset to impose a small computational load. A formal model for a useful subclass of fuzzy systems is defined below and used as a framework for the automated generation of LFPs. By reducing the arithmetic complexity of the model (a hand-crafted process) and the data complexity of the model (an automated process), we show how LFPs can be generated for three sample datasets of clinical relevance.
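    A lightweight fuzzy primitive of the kind the paper advocates can be sketched with a triangular membership function, which costs only a couple of subtractions and one division per evaluation. The clinical thresholds below are hypothetical, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a and c and peak b --
    the kind of low-arithmetic-complexity primitive a lightweight
    fuzzy process would favour over, say, Gaussian memberships."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Toy rule: raise an alert when a serum level is "high" AND its trend
# is "rising"; min() serves as the fuzzy AND. Thresholds hypothetical.
high   = tri(7.2, 5.0, 8.0, 11.0)   # membership of level 7.2 in "high"
rising = tri(0.6, 0.0, 1.0, 2.0)    # membership of slope 0.6 in "rising"
alert  = min(high, rising)
print(round(alert, 3))
```

    A rule base built from such primitives can run continuously alongside routine transaction processing precisely because each evaluation touches only a handful of arithmetic operations.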

  5. Numerical simulation of MPD thruster flows with anomalous transport

    NASA Technical Reports Server (NTRS)

    Caldo, Giuliano; Choueiri, Edgar Y.; Kelly, Arnold J.; Jahn, Robert G.

    1992-01-01

    Anomalous transport effects in an Ar self-field coaxial MPD thruster are presently studied by means of a fully 2D two-fluid numerical code; its calculations are extended to a range of typical operating conditions. An effort is made to compare the spatial distribution of the steady state flow and field properties and thruster power-dissipation values for simulation runs with and without anomalous transport. A conductivity law based on the nonlinear saturation of lower hybrid current-driven instability is used for the calculations. Anomalous-transport simulation runs have indicated that the resistivity in specific areas of the discharge is significantly higher than that calculated in classical runs.

  6. An ultra low noise telecom wavelength free running single photon detector using negative feedback avalanche diode

    NASA Astrophysics Data System (ADS)

    Yan, Zhizhong; Hamel, Deny R.; Heinrichs, Aimee K.; Jiang, Xudong; Itzler, Mark A.; Jennewein, Thomas

    2012-07-01

    It is challenging to implement genuine free-running single-photon detectors for the 1550 nm wavelength range with simultaneously high detection efficiency (DE), low dark noise, and good time resolution. We report a novel readout system for the signals from a negative feedback avalanche diode (NFAD) [M. A. Itzler, X. Jiang, B. Nyman, and K. Slomkowski, "Quantum sensing and nanophotonic devices VI," Proc. SPIE 7222, 72221K (2009), 10.1117/12.814669; X. Jiang, M. A. Itzler, K. ODonnell, M. Entwistle, and K. Slomkowski, "Advanced photon counting techniques V," Proc. SPIE 8033, 80330K (2011), 10.1117/12.883543; M. A. Itzler, X. Jiang, B. M. Onat, and K. Slomkowski, "Quantum sensing and nanophotonic devices VII," Proc. SPIE 7608, 760829 (2010), 10.1117/12.843588], which allows useful operation of these devices at a temperature of 193 K and results in very low dark counts (˜100 counts per second (CPS)), good time jitter (˜30 ps), and good DE (˜10%). We characterized two NFADs with a time-correlation method using photons generated from weak coherent pulses and photon pairs produced by spontaneous parametric down-conversion. The inferred detector efficiencies for the two types of photon source agree with each other. The best noise equivalent power of the device is estimated to be 8.1 × 10⁻¹⁸ W Hz⁻¹/², more than 10 times better than typical InP/InGaAs single-photon avalanche diodes (SPADs) show in free-running mode. The afterpulsing probability was found to be less than 0.1% per ns at the optimized operating point. In addition, we studied the performance of entanglement-based quantum key distribution (QKD) using these detectors and developed a model for the quantum bit error rate that incorporates the afterpulsing coefficients. We verified experimentally that using these NFADs it is feasible to implement QKD over 400 km of telecom fiber. Our NFAD photon detector system is very simple and is well suited for single-photon applications where ultra-low noise and free-running operation are required and some afterpulsing can be tolerated.

  7. An ultra low noise telecom wavelength free running single photon detector using negative feedback avalanche diode.

    PubMed

    Yan, Zhizhong; Hamel, Deny R; Heinrichs, Aimee K; Jiang, Xudong; Itzler, Mark A; Jennewein, Thomas

    2012-07-01

It is challenging to implement genuinely free-running single-photon detectors for the 1550 nm wavelength range with simultaneously high detection efficiency (DE), low dark noise, and good time resolution. We report a novel readout system for the signals from a negative feedback avalanche diode (NFAD) [M. A. Itzler, X. Jiang, B. Nyman, and K. Slomkowski, "Quantum sensing and nanophotonic devices VI," Proc. SPIE 7222, 72221K (2009); X. Jiang, M. A. Itzler, K. O'Donnell, M. Entwistle, and K. Slomkowski, "Advanced photon counting techniques V," Proc. SPIE 8033, 80330K (2011); M. A. Itzler, X. Jiang, B. M. Onat, and K. Slomkowski, "Quantum sensing and nanophotonic devices VII," Proc. SPIE 7608, 760829 (2010)], which allows useful operation of these devices at a temperature of 193 K and results in very low dark counts (∼100 counts per second (CPS)), good time jitter (∼30 ps), and good DE (∼10%). We characterized two NFADs with a time-correlation method using photons generated from weak coherent pulses and photon pairs produced by spontaneous parametric down-conversion. The inferred detector efficiencies for the two types of photon sources agree with each other. The best noise equivalent power of the device is estimated to be 8.1 × 10^-18 W Hz^-1/2, more than 10 times better than typical InP/InGaAs single-photon avalanche diodes (SPADs) achieve in free-running mode. The afterpulsing probability was found to be less than 0.1% per ns at the optimized operating point. In addition, we studied the performance of entanglement-based quantum key distribution (QKD) using these detectors and developed a model for the quantum bit error rate that incorporates the afterpulsing coefficients. We verified experimentally that with these NFADs it is feasible to implement QKD over 400 km of telecom fiber. 
Our NFAD photon detector system is very simple and is well suited for single-photon applications where ultra-low noise and free-running operation are required and some afterpulsing can be tolerated.

  8. Time left in the mouse.

    PubMed

    Cordes, Sara; King, Adam Philip; Gallistel, C R

    2007-02-22

    Evidence suggests that the online combination of non-verbal magnitudes (durations, numerosities) is central to learning in both human and non-human animals [Gallistel, C.R., 1990. The Organization of Learning. MIT Press, Cambridge, MA]. The molecular basis of these computations, however, is an open question at this point. The current study provides the first direct test of temporal subtraction in a species in which the genetic code is available. In two experiments, mice were run in an adaptation of Gibbon and Church's [Gibbon, J., Church, R.M., 1981. Time left: linear versus logarithmic subjective time. J. Exp. Anal. Behav. 7, 87-107] time left paradigm in order to characterize typical responding in this task. Both experiments suggest that mice engaged in online subtraction of temporal values, although the generalization of a learned response rule to novel stimulus values resulted in slightly less systematic responding. Potential explanations for this pattern of results are discussed.

  9. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit with the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit with the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
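The core construction described in the abstract — non-negative weights that sum to one, chosen by a linear program that makes the approximation error explicit — can be sketched on a toy problem. The formulation below (SciPy's `linprog`, an L1 slack objective, and the variable layout) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_lp(V, x):
    """Find w >= 0 with sum(w) = 1 minimizing sum of per-coordinate
    slacks s subject to |V @ w - x| <= s (elementwise).
    V: (d, n) array whose columns are reference points; x: (d,) target."""
    d, n = V.shape
    # decision vector z = [w (n entries), s (d entries)]
    c = np.concatenate([np.zeros(n), np.ones(d)])
    I = np.eye(d)
    A_ub = np.block([[V, -I],     #  V w - s <=  x
                     [-V, -I]])   # -V w - s <= -x
    b_ub = np.concatenate([x, -x])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)])[None, :]  # sum(w) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * (n + d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# Triangle vertices as columns; the query point lies inside, so the
# barycentric weights are exact (slack is zero).
V = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([0.25, 0.25])
w = barycentric_lp(V, x)   # -> approximately [0.5, 0.25, 0.25]
```

For a point outside the convex hull of the reference points, the slack variables absorb the unavoidable residual, which is the "allowing the approximation errors explicitly" aspect the abstract emphasizes.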

  11. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated in the same manner as in the previous method. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and both methods generated the same accuracy.
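As a baseline for the schemes compared above: the ordinary explicit FDTD method advances the fields with a pair of staggered curl updates per time step, and it is this loop that ADI/LOD variants replace with implicit sub-steps to lift the stability limit. A generic 1-D sketch in normalized units (grid size, step counts, and the Gaussian source are illustrative choices, not parameters from the paper):

```python
import numpy as np

def fdtd_1d(nx=200, nt=400):
    """Explicit 1-D FDTD (Yee leapfrog) in normalized units at
    Courant number S = 1; ez[0] and hy[-1] act as fixed boundaries."""
    ez = np.zeros(nx)   # electric field samples
    hy = np.zeros(nx)   # magnetic field samples, staggered by half a cell
    for t in range(nt):
        hy[:-1] += ez[1:] - ez[:-1]   # H update from the curl of E
        ez[1:] += hy[1:] - hy[:-1]    # E update from the curl of H
        ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

ez = fdtd_1d()   # fields stay bounded: the explicit S = 1 scheme is stable
```

The explicit scheme does one cheap sub-step per field per iteration but is bound by the Courant stability limit; the unconditionally stable ADI/LOD variants trade tridiagonal solves for the freedom to take larger time steps.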

  12. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    PubMed

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  13. Experimental study on anomalous neutron production in deuterium/solid system

    NASA Astrophysics Data System (ADS)

    He, Jianyu; Zhu, Rongbao; Wang, Xiaozhong; Lu, Feng; Luo, Longjun; Liu, Hengjun; Jiang, Jincai; Tian, Baosheng; Chen, Guoan; Yuan, Yuan; Dong, Baiting; Yang, Liucheng; Qiao, Shengzhong; Yi, Guoan; Guo, Hua; Ding, Dazhao; Menlove, H. O.

    1991-05-01

    A series of experiments on both D2O electrolysis and thermal cycling of deuterium-absorbed Ti turnings was designed to examine anomalous phenomena in the deuterium/solid system. A neutron detector containing 16 BF3 tubes, with a detection limit of 0.38 n/s for a two-hour count, was used for the electrolysis experiments. No neutron counting rate statistically higher than the detection limit was observed in Fleischmann-and-Pons-type experiments. An HLNCC neutron detector equipped with 18 3He tubes and a JSR-11 shift register unit, with a detection limit of 0.20 n/s for a two-hour run, was employed to study the neutron signals in D2 gas experiments. Different material pretreatments were selected to examine the changes in frequency and size of the neutron burst production. The experiment sequence was deliberately designed to distinguish neutron bursts from fake signals, e.g., electronic noise pickup, cosmic rays, and other sources of environmental background. Ten batches of dry fusion samples were tested; among them, seven batches showed neutron burst signals occurring roughly at temperatures from -100 °C to near room temperature. In the first four runs of a typical sample batch, seven neutron bursts were observed with neutron numbers from 15 to 482, which are 3 and 75 times higher, respectively, than the uncertainty of the background. However, no bursts occurred for H2 dummy samples run in between and afterwards, nor for the sample batch after certain runs.

  14. Molecular Diagnostics for the Study of Hypersonic Flows

    DTIC Science & Technology

    2000-04-01

    Excerpts: the fast electrons exit the anode disk between the electrodes; measurements were made at the F4 high-enthalpy wind tunnel [21]. Figure 4 is a schematic diagram of the discharge and grounded electrode. Figure 5 shows a typical F4 run, with the flow at 90 ms and convection imaged 5 μs after beam emission. X(1) accounts for classical phenomena like absorption and refraction; X(2) is the second-order susceptibility. Figure 6 shows the velocity profile at 90 ms for a run.

  15. A Lossless Network for Data Acquisition

    NASA Astrophysics Data System (ADS)

    Jereczek, Grzegorz; Lehmann Miotto, Giovanna; Malone, David; Walukiewicz, Miroslaw

    2017-06-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial off-the-shelf servers, using the ATLAS experiment as a case study. In this paper, we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on distinct physical servers as a demonstrator.

  16. Evaluation of the Tropical Pacific Observing System from the Data Assimilation Perspective

    DTIC Science & Technology

    2014-01-01

    (hereafter, SIDA systems) have the capacity to assimilate salinity profiles, imposing a multivariate (mainly T-S) balance relationship (summarized in Fujii et al., 2011). Current SIDA systems in operational centers generally use ocean general circulation models (OGCMs) with resolution typically 1… Long-term (typically 20-30 year) ocean DA runs are often performed with SIDA systems in operational centers for validation and calibration of SI…

  17. Utility of the Conconi's heart rate deflection to monitor the intensity of aerobic training.

    PubMed

    Passelergue, Philippe A; Cormery, Bruno; Lac, Gérard; Léger, Luc A

    2006-02-01

    Conconi's heart-rate deflection point (HRd) in the heart rate (HR)/speed curve is often used to set aerobic training loads. Training can be set as a percentage of either the running speed or the HR at HRd. In order to establish the limits and usefulness of various aerobic-training modalities for athletes of intermediate level (physical-education students), acute responses were analyzed while running a typical 40-minute training session. Speed, HR, lactate, and cortisol were recorded during training at 90 and 100% of running speed (RS; n = 14) and HR (HR; n = 16) at HRd (90% running speed [RS90], 100% running speed [RS100], 90% HR [HR90], and 100% HR [HR100]). During constant-HR training, RS decreases, whereas HR drifts upward during constant-RS training. Half of the subjects could not finish the 40-minute RS100 session. For HR90, RS90, HR100, and RS100, average intensities were 67, 69, 74.9, and 77% of maximal aerobic speed (multistage test), respectively. This study indicates that (1) training at HR100 and RS100 is more appropriate for improving high-intensity metabolic capacities (increased cortisol and lactate), although RS100 is too difficult to maintain for 40 minutes, at least for subjects at this level; (2) training at HR90, however, is better for improving endurance and the capacity to do a large amount of work, considering cortisol and lactate homeostasis; and (3) training at a constant HR using a HR monitor is a good way to control training intensity with subjects not used to pacing themselves with the split-time approach.

  18. VizieR Online Data Catalog: Ultracool white dwarfs (Gianninas+, 2015)

    NASA Astrophysics Data System (ADS)

    Gianninas, A.; Curd, B.; Thorstensen, J. R.; Kilic, M.; Bergeron, P.; Andrews, J. J.; Canton, P.; Agueros, M. A.

    2015-11-01

    All our parallax data are from the 2.4 m Hiltner telescope at Michigan-Dartmouth-MIT (MDM) Observatory on Kitt Peak, Arizona. We used a thinned SITe CCD (named 'echelle'); at the f/7.5 focus, each 24 μm pixel subtended 0.275 arcsec, giving a field of view 9.4 arcmin on a side. For all our parallax data, we used a 4-inch-square Kron-Cousins I-band filter, which did not vignette the CCD. Exposure times varied with the brightness of the object but were typically a few hundred seconds. Our data were taken on numerous observing runs between 2007 and 2011. (4 data files).
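The quoted numbers are mutually consistent: at 0.275 arcsec per pixel, a detector about 2048 pixels on a side (an assumed size; the record does not state it) spans roughly 9.4 arcmin. A one-line check:

```python
pixels = 2048            # assumed detector width in pixels (not stated in the record)
pixel_scale = 0.275      # arcsec per 24 um pixel, from the record
fov_arcmin = pixels * pixel_scale / 60.0
print(round(fov_arcmin, 1))   # -> 9.4, matching the quoted ~9.4 arcmin field
```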

  19. Status of the AFP project in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Taševský, Marek

    2015-04-01

    The status of the AFP project in the ATLAS experiment is summarized. The AFP system is composed of a tracker to detect intact, diffractively scattered protons and of a time-of-flight detector serving to suppress background from pile-up interactions. The whole system, located about 210 m from the main ATLAS detector, is housed in Roman Pots, which move the detectors toward and away from the incident proton beams. A typical distance of closest approach of the tracker to these beams is 2-3 mm. The main physics motivation lies in measuring diffractive processes in runs with relatively low pile-up.

  20. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    Table-of-contents excerpt: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B, Key Statistic Matrix: Runs 1-12; Appendix C, Blue Dart; Paired-T result, Run 5 vs. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9.

  1. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  2. Root canal

    MedlinePlus

    ... top part of your tooth to expose the pulp. This is typically called access. Pulp is made up of nerves, blood vessels, and ... the tooth and runs to the jaw bone. Pulp supplies blood to a tooth and allows you ...

  3. Protocol for evaluating the effects of a therapeutic foot exercise program on injury incidence, foot functionality and biomechanics in long-distance runners: a randomized controlled trial.

    PubMed

    Matias, Alessandra B; Taddei, Ulisses T; Duarte, Marcos; Sacco, Isabel C N

    2016-04-14

    Overall performance, particularly in a very popular sports activity such as running, is typically influenced by the status of the musculoskeletal system and the level of training and conditioning of the biological structures. Any change in the musculoskeletal system's biomechanics, especially in the feet and ankles, will strongly influence the biomechanics of runners, possibly predisposing them to injuries. A thorough understanding of the effects of a therapeutic approach focused on foot biomechanics and on the strength and functionality of lower-limb muscles will contribute to the adoption of more effective therapeutic and preventive strategies for runners. A randomized, prospective, controlled, parallel trial with blind assessment has been designed to study the effects of a "ground-up" therapeutic approach focused on the foot-ankle complex as it relates to the incidence of running-related injuries in the lower limbs. One hundred and eleven (111) healthy long-distance runners will be randomly assigned to either a control (CG) or intervention (IG) group. IG runners will participate in a therapeutic exercise protocol for the foot-ankle for 8 weeks, with 1 directly supervised session and 3 remotely supervised sessions per week. After the 8-week period, IG runners will keep exercising for the remaining 10 months of the study, supervised only by web-enabled software three times a week. At baseline, 2 months, 4 months and 12 months, all runners will be assessed for running-related injuries (primary outcome), time to the occurrence of the first injury, foot health and functionality, muscle trophism, intrinsic foot muscle strength, dynamic foot arch strain and lower-limb biomechanics during walking and running (secondary outcomes). This is the first randomized clinical trial protocol to assess the effect of an exercise protocol designed specifically for the foot-ankle complex on running-related injuries to the lower limbs of long-distance runners. 
We intend to show that the proposed protocol is an innovative and effective approach to decreasing the incidence of injuries. We also expect a lengthening in the time of occurrence of the first injury, an improvement in foot function, an increase in foot muscle mass and strength and beneficial biomechanical changes while running and walking after a year of exercising. Clinicaltrials.gov Identifier NCT02306148 (November 28, 2014) under the name "Effects of Foot Strengthening on the Prevalence of Injuries in Long Distance Runners". Committee of Ethics in Research of the School of Medicine of the University of Sao Paulo (18/03/2015, Protocol # 031/15).

  4. Running with emotion: when affective content hampers working memory performance.

    PubMed

    Fairfield, Beth; Mammarella, Nicola; Di Domenico, Alberto; Palumbo, Rocco

    2015-03-01

    This study tested the hypothesis that affective content may undermine rather than facilitate working memory (WM) performance. To this end, participants performed a running WM task with positive, negative and neutral words. In typical running memory tasks, participants are presented with lists of unpredictable length and are asked to recall the last three or four items. We found that accuracy with affective words decreased as lists lengthened, whereas list length did not influence recall of neutral words. We interpreted this pattern of results in terms of a limited resource model of WM in which valence represents additional information that needs to be manipulated, especially in the context of difficult trials. © 2014 International Union of Psychological Science.

  5. When does eruption run-up begin? Multidisciplinary insight from the 1999 eruption of Shishaldin volcano

    NASA Astrophysics Data System (ADS)

    Rasmussen, Daniel J.; Plank, Terry A.; Roman, Diana C.; Power, John A.; Bodnar, Robert J.; Hauri, Erik H.

    2018-03-01

    During the run-up to eruption, volcanoes often show geophysically detectable signs of unrest. However, there are long-standing challenges in interpreting the signals and evaluating the likelihood of eruption, especially during the early stages of volcanic unrest. Considerable insight can be gained from combined geochemical and geophysical studies. Here we take such an approach to better understand the beginning of eruption run-up, viewed through the lens of the 1999 sub-Plinian basaltic eruption of Shishaldin volcano, Alaska. The eruption is of interest due to its lack of observed deformation and its apparent long run-up time (9 months), following a deep long-period earthquake swarm. We evaluate the nature and timing of recharge by examining the composition of 138 olivine macrocrysts and 53 olivine-hosted melt inclusions and through shear-wave splitting analysis of regional earthquakes. Magma mixing is recorded in three crystal populations: a dominant population of evolved olivines (Fo60-69) that are mostly reversely zoned, an intermediate population (Fo69-76) with mixed zonation, and a small population of normally zoned more primitive olivines (Fo76-80). Mixing-to-eruption timescales are obtained through modeling of Fe-Mg interdiffusion in 78 olivines. The large number of resultant timescales provides a thorough record of mixing, demonstrating at least three mixing events: a minor event ∼11 months prior to eruption, overlapping within uncertainty with the onset of deep long-period seismicity; a major event ∼50 days before eruption, coincident with a large (M5.2) shallow earthquake; and a final event about a week prior to eruption. Shear-wave splitting analysis shows a change in the orientation of the local stress field about a month after the deep long-period swarm and around the time of the M5.2 event. 
Earthquake depths and vapor saturation pressures of Raman-reconstructed melt inclusions indicate that the recharge magma originated from depths of at least 20 km, and that mixing with a shallow magma or olivine cumulates occurred in or just below the edifice (<3 km depth). Deformation was likely outside the spatial and temporal resolution of the satellite measurements. Prior to eruption magma was stored over a large range of depths (∼0-2.5 km below the summit), suggesting a shallow, vertical reservoir that could provide another explanation for the lack of detectable deformation. The earliest sign of unrest (deep long-period seismicity) coincides temporally with magmatic activity (magma mixing and a change in the local stress state), possibly indicating the beginning of eruption run-up. The more immediate run-up began with the major recharge event ∼50 days prior to eruption, after which the signs of unrest became continuous. This timescale is long compared to the seismic run-up to other basaltic eruptions (typically hours to days). Other volcanoes classified as open-system, based on their lack of precursory deformation, also tend to have relatively long run-up durations, which may be related to the time required to fill the shallow reservoir with magmas sourced from greater depth.
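The mixing-to-eruption timescales quoted above come from modeling Fe-Mg interdiffusion in zoned olivine, which at order-of-magnitude level follows the generic scaling t ≈ w²/(4D) for a zoned rim of width w. A rough sketch of that arithmetic (the rim width and the diffusivity below are assumed illustrative values, not numbers from the study):

```python
def diffusion_timescale_days(width_m, diffusivity_m2_s):
    """Characteristic diffusion time t = w^2 / (4 D), converted to days."""
    return width_m ** 2 / (4.0 * diffusivity_m2_s) / 86400.0

# Assumed values: a 20 um zoned rim and D ~ 1e-17 m^2/s, a plausible
# order of magnitude for Fe-Mg interdiffusion in olivine at magmatic
# temperatures (both are illustrative, not taken from the abstract).
t_days = diffusion_timescale_days(20e-6, 1e-17)
print(round(t_days))   # -> 116 days, i.e. the weeks-to-months run-up scale
```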

  6. TEA CO2 Laser Simulator: A software tool to predict the output pulse characteristics of the TEA CO2 laser

    NASA Astrophysics Data System (ADS)

    Abdul Ghani, B.

    2005-09-01

    "TEA CO2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of the gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summary Title of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.: 7 681 109 Distribution format: tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO2-N2-He gas mixture. Method of solution: The six-temperature model for the dynamic emission of the TEA CO2 laser has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches: an empirical function, equation (8), and a differential equation, equation (9). Typical running time: The program's running time depends mainly on both the integration interval and the integration step; for a 4 μs period of time and a 0.001 μs integration step (the default values used in the program), the running time will be about 4 seconds. 
Restrictions on the complexity: Using a very small integration step may cause the program run to stop, due to the huge number of calculation points and to a small paging file size for the MS-Windows virtual memory. In such cases, it is recommended to enlarge the paging file to an appropriate size or to use a larger integration step.
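The quoted running time tracks the number of integration points, which is simply the simulated window divided by the step: the defaults give 4 μs / 0.001 μs = 4000 points, so halving the step roughly doubles the runtime. A trivial sketch of that bookkeeping (generic, not the program's Delphi code):

```python
def n_steps(window_us, step_us):
    """Number of integration points for a fixed-step solver over a window."""
    return int(round(window_us / step_us))

print(n_steps(4.0, 0.001))    # -> 4000 points at the program defaults
print(n_steps(4.0, 0.0005))   # -> 8000 points: half the step, twice the work
```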

  7. Aviation NOx-induced CH4 effect: Fixed mixing ratio boundary conditions versus flux boundary conditions

    NASA Astrophysics Data System (ADS)

    Khodayari, Arezoo; Olsen, Seth C.; Wuebbles, Donald J.; Phoenix, Daniel B.

    2015-07-01

    Atmospheric chemistry-climate models are often used to calculate the effect of aviation NOx emissions on atmospheric ozone (O3) and methane (CH4). Due to the long (∼10 yr) atmospheric lifetime of methane, model simulations must be run for long time periods, typically for more than 40 simulation years, to reach steady-state if using CH4 emission fluxes. Because of the computational expense of such long runs, studies have traditionally used specified CH4 mixing ratio lower boundary conditions (BCs) and then applied a simple parameterization based on the change in CH4 lifetime between the control and NOx-perturbed simulations to estimate the change in CH4 concentration induced by NOx emissions. In this parameterization a feedback factor (typically a value of 1.4) is used to account for the feedback of CH4 concentrations on its lifetime. Modeling studies comparing simulations using CH4 surface fluxes and fixed mixing ratio BCs are used to examine the validity of this parameterization. The latest version of the Community Earth System Model (CESM), with the CAM5 atmospheric model, was used for this study. Aviation NOx emissions for 2006 were obtained from the AEDT (Aviation Environmental Design Tool) global commercial aircraft emissions. Results show a 31.4 ppb change in CH4 concentration when estimated using the parameterization and a 1.4 feedback factor, and a 28.9 ppb change when the concentration was directly calculated in the CH4 flux simulations. The model calculated value for CH4 feedback on its own lifetime agrees well with the 1.4 feedback factor. Systematic comparisons between the separate runs indicated that the parameterization technique overestimates the CH4 concentration by 8.6%. Therefore, it is concluded that the estimation technique is good to within ∼10% and decreases the computational requirements in our simulations by nearly a factor of 8.
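The quoted 8.6% overestimate follows directly from the two concentration changes in the abstract; a quick arithmetic check (values from the abstract; the small difference from 8.6% presumably reflects rounding in the quoted ppb values):

```python
delta_param = 31.4   # ppb change from the feedback-factor parameterization
delta_flux = 28.9    # ppb change from the explicit CH4-flux simulation
overestimate_pct = (delta_param - delta_flux) / delta_flux * 100.0
print(round(overestimate_pct, 1))   # -> 8.7, close to the quoted 8.6%
```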

  8. Performance of a commercial transport under typical MLS noise environment

    NASA Technical Reports Server (NTRS)

    Ho, J. K.

    1986-01-01

    The performance of a 747-200 automatic flight control system (AFCS) subjected to typical Microwave Landing System (MLS) noise is discussed. The performance is then compared with the results from a previous study which had a B747 AFCS subjected to the MLS standards and recommended practices (SARPS) maximum allowable noise. A glide slope control run with Instrument Landing System (ILS) noise is also conducted. Finally, a linear covariance analysis is presented.

  9. All Sky Search for Gravitational-Wave Bursts in the Second Joint LIGO-Virgo Run

    NASA Technical Reports Server (NTRS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Affeldt, C.; hide

    2012-01-01

    We present results from a search for gravitational-wave bursts in the data collected by the LIGO and Virgo detectors between July 7, 2009 and October 20, 2010: data are analyzed when at least two of the three LIGO-Virgo detectors are in coincident operation, with a total observation time of 207 days. The analysis searches for transients of duration ≲1 s over the frequency band 64-5000 Hz, without other assumptions on the signal waveform, polarization, direction or occurrence time. All identified events are consistent with the expected accidental background. We set frequentist upper limits on the rate of gravitational-wave bursts by combining this search with the previous LIGO-Virgo search on the data collected between November 2005 and October 2007. The upper limit on the rate of strong gravitational-wave bursts at the Earth is 1.3 events per year at 90% confidence. We also present upper limits on source rate density per year and Mpc^3 for sample populations of standard-candle sources. As in the previous joint run, typical sensitivities of the search in terms of the root-sum-squared strain amplitude for these waveforms lie in the range of approximately 5 × 10^-22 Hz^-1/2 to 1 × 10^-20 Hz^-1/2. The combination of the two joint runs entails the most sensitive all-sky search for generic gravitational-wave bursts and synthesizes the results achieved by the initial generation of interferometric detectors.

  10. Real-time black carbon emission factors of light-duty vehicles tested on a chassis dynamometer

    NASA Astrophysics Data System (ADS)

    Forestieri, S. D.; Cappa, C. D.; Kuwayama, T.; Collier, S.; Zhang, Q.; Kleeman, M. J.

    2012-12-01

Eight light-duty gasoline vehicles were tested on a chassis dynamometer using the California Unified Driving Cycle (UDC) at the Haagen-Smit vehicle test facility at the California Air Resources Board (CARB) in El Monte, CA during September 2011. In addition, one light-duty gasoline vehicle, one ultra low-emission vehicle, one diesel passenger vehicle, and one gasoline direct injection vehicle were tested on a constant velocity driving cycle. Vehicle exhaust was diluted through CARB's CVS tunnel and a secondary dilution system in order to examine particulate matter (PM) emissions at atmospherically relevant concentrations (5-30 μg/m³). A variety of real-time instrumentation was used to characterize how the major PM components vary during a typical driving cycle, which includes a cold start phase followed by a hot stabilized running phase. Aerosol absorption coefficients were obtained at 532 nm and 405 nm with a time resolution of 2 seconds from a photo-acoustic spectrometer. These absorption coefficients were then converted to black carbon (BC) concentrations via a mass absorption coefficient. Non-refractory organic and inorganic PM and CO2 concentrations were quantified with a time resolution of 10 seconds using a High-Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS). Real-time BC and CO2 concentrations allowed for the determination of BC emission factors (EFs), providing insights into the variability of BC EFs during different phases of a typical driving cycle and aiding in the modeling of BC emissions.
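The fuel-based emission-factor idea described above (simultaneous real-time BC and CO2 giving grams of BC per kilogram of fuel) can be sketched with a simple carbon balance. This is a hedged illustration, not CARB's procedure: the gasoline carbon fraction and the assumption that essentially all fuel carbon leaves the tailpipe as CO2 are stated inputs.

```python
# Carbon-balance BC emission factor sketch: g BC per kg fuel, assuming
# nearly all fuel carbon is emitted as CO2. All constants illustrative.

M_C = 12.011          # g/mol, carbon
W_C_GASOLINE = 0.85   # assumed carbon mass fraction of gasoline

def bc_emission_factor(bc_ug_m3, co2_ppm_enhancement,
                       w_c=W_C_GASOLINE, t_kelvin=298.15, p_pa=101325.0):
    """BC emission factor (g BC / kg fuel) from dilute-exhaust
    concentrations: BC in ug/m3, CO2 enhancement over background in ppm."""
    R = 8.314                                    # J/(mol K)
    mol_air_per_m3 = p_pa / (R * t_kelvin)       # ideal gas law
    co2_mol_m3 = co2_ppm_enhancement * 1e-6 * mol_air_per_m3
    carbon_g_m3 = co2_mol_m3 * M_C               # fuel carbon, as CO2
    bc_g_m3 = bc_ug_m3 * 1e-6
    return bc_g_m3 / carbon_g_m3 * w_c * 1000.0  # scale C to whole fuel, per kg
```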

  11. Toward a Progress Indicator for Machine Learning Model Building and Data Mining Algorithm Execution: A Position Paper.

    PubMed

    Luo, Gang

    2017-12-01

    For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a nontrivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced, potential uses of them, with the goal of inspiring future research on this topic.
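A minimal sketch of the kind of progress indicator the article describes, assuming the task can report completed work units: it estimates the finished fraction and the remaining time from the average pace so far. Real model building violates the constant-pace assumption, which is exactly the difficulty the paper discusses.

```python
import time

class ProgressIndicator:
    """Toy progress indicator: completed fraction plus a remaining-time
    estimate from elapsed time scaled by (work left / work done)."""

    def __init__(self, total_units):
        self.total = total_units
        self.done = 0
        self.start = time.monotonic()

    def update(self, units=1):
        self.done += units

    @property
    def fraction_done(self):
        return self.done / self.total

    def remaining_seconds(self):
        if self.done == 0:
            return float("inf")   # no pace information yet
        elapsed = time.monotonic() - self.start
        return elapsed * (self.total - self.done) / self.done
```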

  12. Applications of magnetostrictive materials in the real-time monitoring of vehicle suspension components

    NASA Astrophysics Data System (ADS)

    Estrada, Raul

The purpose of this project is to explore applications of magnetostrictive materials for real-time monitoring of railroad suspension components, in particular bearings. Monitoring of such components typically requires the tracking of temperature, vibration, and load. In addition, real-time, long-term monitoring can be greatly facilitated through the use of wireless, self-powered sensors. Magnetostrictive materials, such as Terfenol-D, have the potential to address both requirements. Currently, piezoelectrics are used for many load and energy harvesting applications; however, they are fragile and are difficult to use for static load measurements. Magnetostrictive metals are tougher, and their property of variable permeability when stressed can be utilized to measure static loads. A prototype load sensor was successfully fabricated and characterized, yielding less than 10% error under normal operating conditions. Energy harvesting experiments generated a little over 80 mW of power, which is sufficient to run low-power condition monitoring systems.

  13. An algorithm for synchronizing a clock when the data are received over a network with an unstable delay

    PubMed Central

    Levine, Judah

    2016-01-01

A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) choosing the highest possible accuracy, (2) choosing the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests needed to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors, is included as an example. PMID:26529759
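The interval-selection step described above can be illustrated with an idealized noise model: network delay jitter acts as white phase noise whose Allan deviation averages down with the polling interval, while the local clock has a flat free-running stability floor. The crossing point of the two curves suggests the polling interval. The model form and numbers below are illustrative assumptions, not the paper's algorithm.

```python
# Toy interval selection: polling much faster than the crossover wastes
# requests on channel noise; polling much slower lets the local clock
# wander beyond what the measurements can correct.

def crossover_interval(delay_jitter_s, clock_floor_adev):
    """Interval tau* (s) where the channel's white-phase-noise Allan
    deviation (~ jitter / tau) falls to the clock's stability floor."""
    return delay_jitter_s / clock_floor_adev

# e.g. 1 ms of network delay jitter against a 1e-8 flicker floor
# suggests polling roughly every 1e5 seconds (about a day).
interval = crossover_interval(1e-3, 1e-8)
```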

  14. Toward a Progress Indicator for Machine Learning Model Building and Data Mining Algorithm Execution: A Position Paper

    PubMed Central

    Luo, Gang

    2017-01-01

    For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a nontrivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two advanced, potential uses of them, with the goal of inspiring future research on this topic. PMID:29177022

  15. Radiative corrections from heavy fast-roll fields during inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S., E-mail: jain@cp3.dias.sdu.dk, E-mail: sandora@cp3.dias.sdu.dk, E-mail: sloth@cp3.dias.sdu.dk

    2015-06-01

We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum. Interestingly, within the present framework it is a generic outcome that a large running implies a small field model with a vanishing tensor-to-scalar ratio, circumventing the normal expectation that small field models typically lead to an unobservably small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.

  16. Radiative corrections from heavy fast-roll fields during inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Rajeev Kumar; Sandora, McCullen; Sloth, Martin S.

    2015-06-09

We investigate radiative corrections to the inflaton potential from heavy fields undergoing a fast-roll phase transition. We find that a logarithmic one-loop correction to the inflaton potential involving this field can induce a temporary running of the spectral index. The induced running can be a short burst of strong running, which may be related to the observed anomalies on large scales in the cosmic microwave spectrum, or extend over many e-folds, sustaining an effectively constant running to be searched for in the future. We implement this in a general class of models, where effects are mediated through a heavy messenger field sitting in its minimum. Interestingly, within the present framework it is a generic outcome that a large running implies a small field model with a vanishing tensor-to-scalar ratio, circumventing the normal expectation that small field models typically lead to an unobservably small running of the spectral index. An observable level of tensor modes can also be accommodated, but, surprisingly, this requires running to be induced by a curvaton. If upcoming observations are consistent with a small tensor-to-scalar ratio as predicted by small field models of inflation, then the present study serves as an explicit example contrary to the general expectation that the running will be unobservable.

  17. Acute Restraint Stress Alters Wheel-Running Behavior Immediately Following Stress and up to 20 Hours Later in House Mice.

    PubMed

    Malisch, Jessica L; deWolski, Karen; Meek, Thomas H; Acosta, Wendy; Middleton, Kevin M; Crino, Ondi L; Garland, Theodore

    In vertebrates, acute stressors-although short in duration-can influence physiology and behavior over a longer time course, which might have important ramifications under natural conditions. In laboratory rats, for example, acute stress has been shown to increase anxiogenic behaviors for days after a stressor. In this study, we quantified voluntary wheel-running behavior for 22 h following a restraint stress and glucocorticoid levels 24 h postrestraint. We utilized mice from four replicate lines that have been selectively bred for high voluntary wheel-running activity (HR mice) for 60 generations and their nonselected control (C) lines to examine potential interactions between exercise propensity and sensitivity to stress. Following 6 d of wheel access on a 12L∶12D photo cycle (0700-1900 hours, as during the routine selective breeding protocol), 80 mice were physically restrained for 40 min, beginning at 1400 hours, while another 80 were left undisturbed. Relative to unrestrained mice, wheel running increased for both HR and C mice during the first hour postrestraint (P < 0.0001) but did not differ 2 or 3 h postrestraint. Wheel running was also examined at four distinct phases of the photoperiod. Running in the period of 1600-1840 hours was unaffected by restraint stress and did not differ statistically between HR and C mice. During the period of peak wheel running (1920-0140 hours), restrained mice tended to run fewer revolutions (-11%; two-tailed P = 0.0733), while HR mice ran 473% more than C (P = 0.0008), with no restraint × line type interaction. Wheel running declined for all mice in the latter part of the scotophase (0140-0600 hours), restraint had no statistical effect on wheel running, but HR again ran more than C (+467%; P = 0.0122). Finally, during the start of the photophase (0720-1200 hours), restraint increased running by an average of 53% (P = 0.0443) in both line types, but HR and C mice did not differ statistically. 
Mice from HR lines had statistically higher plasma corticosterone concentrations than C mice, with no statistical effect of restraint and no interaction between line type and restraint. Overall, these results indicate that acute stress can affect locomotor activity (or activity patterns) for many hours, with the most prominent effect being an increase in activity during a period of typical inactivity at the start of the photophase, 15-20 h poststressor.

  18. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    PubMed

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  19. Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk

    PubMed Central

    Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Background Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time and mortality remain uncertain. Objectives We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods Running was assessed on the medical history questionnaire by leisure-time activity. Results During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately, 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions Running, even 5-10 minutes per day and slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581

  20. The Role of Near-Shore Bathymetry During Tsunami Inundation in a Reef Island Setting: A Case Study of Tutuila Island

    NASA Astrophysics Data System (ADS)

    Dilmen, Derya I.; Roe, Gerard H.; Wei, Yong; Titov, Vasily V.

    2018-04-01

On September 29, 2009 at 17:48 UTC, an Mw = 8.1 earthquake in the Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and Tonga. Among the hardest hit was the volcanic island of Tutuila in American Samoa. Tutuila has a typical tropical island bathymetry setting influenced by coral reefs, and so the event provided an opportunity to evaluate the relationship between tsunami dynamics and the bathymetry in that typical island environment. Previous work has come to differing conclusions regarding how coral reefs affect tsunami dynamics through their influence on bathymetry and dissipation. This study presents numerical simulations of this event with a focus on two main issues: first, how roughness variations affect tsunami run-up and whether different values of Manning's roughness parameter, n, improve the simulated run-up compared to observations; and second, how depth variations in the shelf bathymetry with coral reefs control run-up and inundation on the island coastlines they shield. We find that no single value of n provides a uniformly good match to all observations, and we find substantial bay-to-bay variations in the impact of varying n. The results suggest that there are aspects of tsunami wave dissipation which are not captured by the simplified drag formulation used in shallow-water wave models. The study also suggests that the primary impact of removing the near-shore bathymetry in a coral reef environment is to reduce run-up, from which we conclude that, at least in this setting, the impact of the near-shore bathymetry is to increase run-up and inundation.
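The bed-friction term behind Manning's roughness parameter n is compact enough to state directly. A standalone sketch of the drag used in typical shallow-water tsunami codes (real solvers apply this inside the momentum equations; the values below are illustrative): the deceleration grows with n squared and with shallower depth, which is why reef roughness matters most in the near-shore zone.

```python
G = 9.81  # gravitational acceleration, m/s^2

def manning_friction_decel(u, h, n):
    """Bed-friction deceleration (m/s^2) for flow speed u (m/s),
    depth h (m), Manning roughness n: g * n^2 * u*|u| / h^(4/3)."""
    return G * n**2 * u * abs(u) / h ** (4.0 / 3.0)

# Doubling n quadruples the drag; halving the depth raises it ~2.5x.
```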

  1. The Role of Near-Shore Bathymetry During Tsunami Inundation in a Reef Island Setting: A Case Study of Tutuila Island

    NASA Astrophysics Data System (ADS)

    Dilmen, Derya I.; Roe, Gerard H.; Wei, Yong; Titov, Vasily V.

    2018-02-01

On September 29, 2009 at 17:48 UTC, an Mw = 8.1 earthquake in the Tonga Trench generated a tsunami that caused heavy damage across Samoa, American Samoa, and Tonga. Among the hardest hit was the volcanic island of Tutuila in American Samoa. Tutuila has a typical tropical island bathymetry setting influenced by coral reefs, and so the event provided an opportunity to evaluate the relationship between tsunami dynamics and the bathymetry in that typical island environment. Previous work has come to differing conclusions regarding how coral reefs affect tsunami dynamics through their influence on bathymetry and dissipation. This study presents numerical simulations of this event with a focus on two main issues: first, how roughness variations affect tsunami run-up and whether different values of Manning's roughness parameter, n, improve the simulated run-up compared to observations; and second, how depth variations in the shelf bathymetry with coral reefs control run-up and inundation on the island coastlines they shield. We find that no single value of n provides a uniformly good match to all observations, and we find substantial bay-to-bay variations in the impact of varying n. The results suggest that there are aspects of tsunami wave dissipation which are not captured by the simplified drag formulation used in shallow-water wave models. The study also suggests that the primary impact of removing the near-shore bathymetry in a coral reef environment is to reduce run-up, from which we conclude that, at least in this setting, the impact of the near-shore bathymetry is to increase run-up and inundation.

  2. Seismic hazard along a crude oil pipeline in the event of an 1811-1812 type New Madrid earthquake. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, H.H.M.; Chen, C.H.S.

    1990-04-16

An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is examined in the report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically-based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoff frequency, and strong-motion duration are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.
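The 27 time histories per site come from a full factorial combination of three typical values for each of the three uncertain parameters. A sketch of that grid (the parameter values are placeholders, not those of the report):

```python
from itertools import product

# Three assumed typical values per uncertain parameter; the full
# factorial gives 3 x 3 x 3 = 27 synthetic time histories per site.
stress_param_bars = [100, 200, 400]
cutoff_freq_hz = [15, 30, 50]
duration_s = [20, 40, 60]

combos = list(product(stress_param_bars, cutoff_freq_hz, duration_s))

for stress, fmax, dur in combos:
    pass  # generate one synthetic bedrock-acceleration time history here
```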

  3. A Typical Synergy

    NASA Astrophysics Data System (ADS)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, so as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures, since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on other novel applications of this technique, such as type dispatching and enforcing type equality invariants on GADT values.

  4. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
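The sharding strategy described above can be sketched in a few lines: key each record by its spatial tile and hash the tile to a node, so the entire time series for any given tile is colocated on one shard and a long time-series analysis becomes a single-node scan. The node count and key scheme are assumptions for illustration, not the benchmark's actual configuration.

```python
from collections import defaultdict

def shard_for(lat_tile, lon_tile, n_nodes=16):
    """Deterministically map a spatial tile to a shard/node index."""
    return hash((lat_tile, lon_tile)) % n_nodes

def shard_records(records, n_nodes=16):
    """Group (lat_tile, lon_tile, time, value) records by node, so each
    tile's whole time series lands on a single shard."""
    shards = defaultdict(list)
    for lat, lon, t, v in records:
        shards[shard_for(lat, lon, n_nodes)].append((lat, lon, t, v))
    return shards
```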

  5. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  6. CCD scanning for asteroids and comets

    NASA Technical Reports Server (NTRS)

    Gehrels, T.; Mcmillan, R. S.

    1986-01-01

A charge-coupled device (CCD) is used in a scanning mode to find new asteroids and recover known asteroids and comet nuclei. Current scientific programs include recovery of asteroids and comet nuclei requested by the Minor Planet Center (MPC), discovery of new asteroids in the main belt and of unusual orbital types, and follow-up astrometry of selected new asteroids discovered. The routine six-sigma limiting visual magnitude is 19.6, and slightly more than a square degree is scanned three times every 90 minutes of observing time during the fortnight centered on New Moon. Semiautomatic software for detection of moving objects is in routine use; angular speeds as low as 11.0 arcseconds per hour were distinguished from the effects of the Earth's atmosphere on the field of view. A typical set of three 29-minute scans near the opposition point along the ecliptic typically nets at least 5 new main-belt asteroids down to magnitude 19.6. In 18 observing runs (months), 43 asteroids were recovered, astrometric and photometric data on 59 new asteroids were reported, 10 new asteroids with orbital elements were consolidated, and photometry and positions of 22 comets were reported.

  7. The CHORDS Portal: Lowering the Barrier for Internet Collection, Archival and Distribution of Real-Time Geophysical Observations

    NASA Astrophysics Data System (ADS)

    Martin, C.; Dye, M. J.; Daniels, M. D.; Keiser, K.; Maskey, M.; Graves, S. J.; Kerkez, B.; Chandrasekar, V.; Vernon, F.

    2015-12-01

    The Cloud-Hosted Real-time Data Services for the Geosciences (CHORDS) project tackles the challenges of collecting and disseminating geophysical observational data in real-time, especially for researchers with limited IT budgets and expertise. The CHORDS Portal is a component that allows research teams to easily configure and operate a cloud-based service which can receive data from dispersed instruments, manage a rolling archive of the observations, and serve these data to any client on the Internet. The research group (user) creates a CHORDS portal simply by running a prepackaged "CHORDS appliance" on Amazon Web Services. The user has complete ownership and management of the portal. Computing expenses are typically very small. RESTful protocols are employed for delivering and fetching data from the portal, which means that any system capable of sending an HTTP GET message is capable of accessing the portal. A simple API is defined, making it straightforward for non-experts to integrate a diverse collection of field instruments. Languages with network access libraries, such as Python, sh, Matlab, R, IDL, Ruby and JavaScript (and most others) can retrieve structured data from the portal with just a few lines of code. The user's private portal provides a browser-based system for configuring, managing and monitoring the health of the integrated real-time system. This talk will highlight the design goals, architecture and agile development of the CHORDS Portal. A running portal, with operational data feeds from across the country, will be presented.
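The abstract notes that any language able to issue an HTTP GET can pull structured data from a portal in a few lines. A minimal, hypothetical sketch using only the Python standard library; the host name, path, and query parameter names below are illustrative placeholders, not the documented CHORDS API.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import json

def build_query_url(portal, instrument_id, last_hours=1):
    """Compose a GET URL for recent measurements from an instrument.
    Path and parameter names are hypothetical."""
    params = urlencode({"id": instrument_id, "last": last_hours})
    return f"http://{portal}/api/instruments.json?{params}"

def fetch_measurements(portal, instrument_id, last_hours=1):
    """Fetch and decode the JSON payload from the portal."""
    with urlopen(build_query_url(portal, instrument_id, last_hours)) as r:
        return json.load(r)

# e.g. fetch_measurements("my-portal.example.org", 7)
```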

  8. Multivariable control of a rapid thermal processor using ultrasonic sensors

    NASA Astrophysics Data System (ADS)

    Dankoski, Paul C. P.

    The semiconductor manufacturing industry faces the need for tighter control of thermal budget and process variations as circuit feature sizes decrease. Strategies to meet this need include supervisory control, run-to-run control, and real-time feedback control. Typically, the level of control chosen depends upon the actuation and sensing available. Rapid Thermal Processing (RTP) is one step of the manufacturing cycle requiring precise temperature control and hence real-time feedback control. At the outset of this research, the primary ingredient lacking from in-situ RTP temperature control was a suitable sensor. This research looks at an alternative to the traditional approach of pyrometry, which is limited by the unknown and possibly time-varying wafer emissivity. The technique is based upon the temperature dependence of the propagation time of an acoustic wave in the wafer. The aim of this thesis is to evaluate the ultrasonic sensors as a potentially viable sensor for control in RTP. To do this, an experimental implementation was developed at the Center for Integrated Systems. Because of the difficulty in applying a known temperature standard in an RTP environment, calibration to absolute temperature is nontrivial. Given reference propagation delays, multivariable model-based feedback control is applied to the system. The modelling and implementation details are described. The control techniques have been applied to a number of research processes including rapid thermal annealing and rapid thermal crystallization of thin silicon films on quartz/glass substrates.
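The sensing principle above (wafer temperature inferred from the temperature dependence of acoustic propagation time) can be illustrated with a toy linear calibration. The model form, reference values, and sensitivity coefficient are hypothetical, not the thesis's calibration; as the abstract notes, obtaining an absolute calibration in an RTP environment is the hard part.

```python
# Toy model: propagation delay grows linearly with temperature,
# t(T) = t0 * (1 + alpha * (T - T0)); invert it to recover T from a
# measured delay. t0, T0, and alpha are placeholder calibration values.

def temperature_from_delay(t_meas, t0, T0=25.0, alpha=1e-4):
    """Temperature (deg C) implied by measured delay t_meas (s), given a
    reference delay t0 at temperature T0 and sensitivity alpha (1/degC)."""
    return T0 + (t_meas / t0 - 1.0) / alpha
```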

  9. MC-TESTER: a universal tool for comparisons of Monte Carlo predictions for particle decays in high energy physics

    NASA Astrophysics Data System (ADS)

Golonka, P.; Pierzchała, T.; Wąs, Z.

    2004-02-01

    Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as exclusive source of information on the generated events. Program summaryTitle of the program:MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used:C++, FORTRAN77: gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. 
of bytes in distributed program, including test data, etc.: 2 024 425. Distribution format: tar gzip file. Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-page booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison. Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations or for discussion of uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and a parameter quantifying the shape difference is calculated. Its maximum over each decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle. On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. 
Generation step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
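The per-channel comparison described above reduces, for each histogram, to a scalar shape-difference measure, and the summary table reports the per-channel maximum. A minimal sketch of that bookkeeping in Python, using half the integrated absolute difference of the unit-normalized histograms as the measure (one plausible choice for illustration, not necessarily MC-TESTER's own definition):

```python
def shape_difference(h1, h2):
    """Shape-difference measure for two binned distributions: half the
    integrated absolute difference of the unit-normalized histograms
    (0 = identical shapes, 1 = fully disjoint)."""
    n1, n2 = float(sum(h1)), float(sum(h2))
    return 0.5 * sum(abs(a / n1 - b / n2) for a, b in zip(h1, h2))

def channel_summary(histogram_pairs):
    """Summary-table entry for one decay channel: the maximum
    shape difference over all its invariant-mass histograms."""
    return max(shape_difference(h1, h2) for h1, h2 in histogram_pairs)

identical = shape_difference([10, 20, 30], [20, 40, 60])  # same shape -> 0.0
disjoint = shape_difference([5, 0, 0], [0, 0, 7])         # no overlap -> 1.0
```

Because the histograms are normalized before differencing, the measure compares shapes only, so two generators with different overall branching ratios but identical kinematic distributions still score zero.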

  10. Gender difference and age-related changes in performance at the long-distance duathlon.

    PubMed

    Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver

    2013-02-01

    Gender differences and age-related changes in triathlon (i.e., swimming, cycling, and running) performance have been investigated previously, but comparable data are missing for the duathlon (i.e., running, cycling, and running). We investigated participation and performance trends, the gender difference, and the age-related decline in performance at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men). Linear regression analyses for the 3 split times and the total event time demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5% for the 10-km run, 150-km cycle, 30-km run, and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time period effect, ranging between 1.3% (first run) and 9.8% (bike split), with 3.3% for the overall race time. In accordance with previous observations in triathlons, the age-related decline in duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the careers of long-distance duathletes, with the age of peak performance between 25 and 39 years for both women and men.

  11. Limitations imposed by wearing armour on Medieval soldiers' locomotor performance.

    PubMed

    Askew, Graham N; Formenti, Federico; Minetti, Alberto E

    2012-02-22

    In Medieval Europe, soldiers wore steel plate armour for protection during warfare. Armour design reflected a trade-off between the protection and the mobility it offered the wearer. By the fifteenth century, a typical suit of field armour weighed between 30 and 50 kg and was distributed over the entire body. How much wearing armour affected Medieval soldiers' locomotor energetics and biomechanics is unknown. We investigated the mechanics and the energetic cost of locomotion in armour, and determined the effects on physical performance. We found that the net cost of locomotion (C(met)) during armoured walking and running is much more energetically expensive than unloaded locomotion. C(met) for locomotion in armour was 2.1-2.3 times higher for walking, and 1.9 times higher for running when compared with C(met) for unloaded locomotion at the same speed. An important component of the increased energy use results from the extra force that must be generated to support the additional mass. However, the energetic cost of locomotion in armour was also much higher than equivalent trunk loading. This additional cost is mostly explained by the increased energy required to swing the limbs and impaired breathing. Our findings can predict age-associated decline in Medieval soldiers' physical performance, and have potential implications in understanding the outcomes of past European military battles.

  12. Limitations imposed by wearing armour on Medieval soldiers' locomotor performance

    PubMed Central

    Askew, Graham N.; Formenti, Federico; Minetti, Alberto E.

    2012-01-01

    In Medieval Europe, soldiers wore steel plate armour for protection during warfare. Armour design reflected a trade-off between the protection and the mobility it offered the wearer. By the fifteenth century, a typical suit of field armour weighed between 30 and 50 kg and was distributed over the entire body. How much wearing armour affected Medieval soldiers' locomotor energetics and biomechanics is unknown. We investigated the mechanics and the energetic cost of locomotion in armour, and determined the effects on physical performance. We found that the net cost of locomotion (Cmet) during armoured walking and running is much more energetically expensive than unloaded locomotion. Cmet for locomotion in armour was 2.1–2.3 times higher for walking, and 1.9 times higher for running when compared with Cmet for unloaded locomotion at the same speed. An important component of the increased energy use results from the extra force that must be generated to support the additional mass. However, the energetic cost of locomotion in armour was also much higher than equivalent trunk loading. This additional cost is mostly explained by the increased energy required to swing the limbs and impaired breathing. Our findings can predict age-associated decline in Medieval soldiers' physical performance, and have potential implications in understanding the outcomes of past European military battles. PMID:21775328

  13. Midsole material-related force control during heel-toe running.

    PubMed

    Kersting, Uwe G; Brüggemann, Gert-Peter

    2006-01-01

    The impact maximum and rearfoot eversion have been used as indicators of load on internal structures in running. The midsole hardness of a typical running shoe was varied systematically to determine the relationship between external ground reaction force (GRF), in-shoe force, and kinematic variables. Eight subjects were tested during overground running at 4 m/s. Rearfoot movement as well as in-shoe forces and external GRF varied nonsystematically with midsole hardness. Kinematic parameters, such as knee flexion and foot velocity at touchdown (TD), also varied nonsystematically with altered midsole hardness. Results demonstrate that considerable variations in in-shoe loading occur that were not depicted by external GRF measurements alone. Individuals apparently use different strategies of mechanical and neuromuscular adaptation in response to footwear modifications. In conclusion, shoe design effects on impact forces or other factors relating to injuries depend on the individual and therefore cannot be generalized.

  14. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  15. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  16. Metamodels for Ozone - Comparison of Two Techniques

    EPA Science Inventory

    A metamodel is a mathematical relationship between the inputs and outputs of a simulation experiment, permitting calculation of outputs for scenarios of interest without having to run new (presumably costly) experiments. Ozone metamodels are typically designed to capture a parti...
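The metamodel idea can be illustrated in a few lines: fit a cheap surrogate to a handful of simulator input/output pairs, then evaluate new scenarios on the surrogate instead of rerunning the simulator. A toy sketch with made-up numbers (a quadratic through three hypothetical runs; real ozone metamodels are fitted to many runs of a full air-quality model):

```python
def fit_quadratic(runs):
    """Build a quadratic metamodel through three (input, output) pairs from
    a simulator, in Lagrange form, so new scenarios can be evaluated
    without running the simulator again."""
    (x0, y0), (x1, y1), (x2, y2) = runs
    def metamodel(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return metamodel

# Three hypothetical simulator runs: scaled emissions input -> peak ozone (ppb).
runs = [(0.0, 40.0), (1.0, 55.0), (2.0, 64.0)]
surrogate = fit_quadratic(runs)
prediction = surrogate(1.5)   # a new scenario, no new simulation needed
```

The surrogate reproduces the training runs exactly and interpolates between them; the trade-off, as the record notes, is that the metamodel is only trustworthy for the part of the input space the original experiments covered.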

  17. Using a dynamic model to assess trends in land degradation by water erosion in Spanish Rangelands

    NASA Astrophysics Data System (ADS)

    Ibáñez, Javier; Francisco Lavado-Contador, Joaquín; Schnabel, Susanne; Pulido-Fernández, Manuel; Martínez Valderrama, Jaime

    2014-05-01

    This work presents a model aimed at evaluating land degradation by water erosion in dehesas and montados of the Iberian Peninsula, which constitute valuable rangelands in the area. A multidisciplinary dynamic model was built including weather, biophysical and economic variables that reflect the main causes and processes affecting sheet erosion on hillsides of the study areas. The model has two main and two derived purposes: Purpose 1: Assessing the risk of degradation that a land-use system is running. Derived purpose 1: Early warning about land-use systems that are particularly threatened by degradation. Purpose 2: Assessing the degree to which different factors would hasten degradation if they changed from the typical values they show at present. Derived purpose 2: Evaluating the role of human activities in degradation. Model variables and parameters have been calibrated for a typical open woodland rangeland (dehesa or montado) defined along 22 working units selected from 10 representative farms and distributed throughout the Spanish region of Extremadura. The model is the basis for a straightforward assessment methodology which is summarized by the three following points: i) The risk of losing a given amount of soil before a given number of years was estimated as the percentage of 1000 simulations in which such a loss occurs, with the simulations run under randomly generated scenarios of rainfall amount and intensity and of meat and supplemental-feed market prices; ii) Statistics on the length of time that a given amount of soil takes to be lost were calculated over 1000 stochastic simulations run until year 1000, thereby ensuring that such an amount of soil has been lost in all of the simulations, i.e. the total risk is 100%; iii) Exogenous factors potentially affecting degradation, mainly climatic and economic, were ranked in order of importance by means of a sensitivity analysis. 
Particularly remarkable in terms of model performance is the major role played in our case study by two positive feedback loops in which the erosion rate is involved. These loops cause erosion to accelerate over time, thereby outweighing the effect of the negative feedbacks also involved in the erosion rate. The estimated lengths of time to lose the upper 5, 10, 15 and 20 cm of the soil (with an initial depth of 23.4 cm) correspond to 138, 245, 317 and 360 years, respectively. The importance of climatic factors in soil removal considerably exceeds that of the economic ones, which showed low impacts on the final model results.
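Point (i) above is a standard Monte Carlo risk estimate: run many stochastic simulations and report the fraction crossing a loss threshold. A minimal sketch of that procedure, using a toy erosion model whose dynamics and parameters are invented for illustration (the paper's actual model is far richer), with the study's 23.4 cm initial depth and a thin-soil positive feedback echoing the feedback loops discussed above:

```python
import random

def simulate_soil_loss(years, rng):
    """One stochastic run of a toy erosion model (hypothetical dynamics,
    not the paper's): annual loss scales with a random rainfall factor and
    a mild positive feedback as the soil thins."""
    initial = 23.4                     # initial soil depth (cm), as in the study
    depth = initial
    for _ in range(years):
        rainfall = rng.uniform(0.5, 1.5)
        rate = 0.05 * rainfall * (1.0 + (initial - depth) / initial)
        depth = max(depth - rate, 0.0)
    return initial - depth             # total soil lost (cm)

def risk_of_loss(threshold_cm, years, n_runs=1000, seed=1):
    """Risk, as in point (i): the percentage of n_runs stochastic
    simulations in which at least threshold_cm of soil is lost."""
    rng = random.Random(seed)
    hits = sum(simulate_soil_loss(years, rng) >= threshold_cm
               for _ in range(n_runs))
    return 100.0 * hits / n_runs

risk = risk_of_loss(threshold_cm=5.0, years=100)
```

Repeating the estimate across a grid of thresholds and horizons yields the kind of risk table the methodology describes, and point (ii)'s statistics follow from recording, per run, the year the threshold is first crossed.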

  18. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.

  19. High-speed thermal cycling system and method of use

    DOEpatents

    Hansen, A.D.A.; Jaklevic, J.M.

    1996-04-16

    A thermal cycling system and method of use are described. The thermal cycling system is based on the circulation of temperature-controlled water directly to the underside of thin-walled polycarbonate plates. The water flow is selected from a manifold fed by pumps from heated reservoirs. The plate wells are loaded with typically 15-20 microliters of reagent mix for the PCR process. Heat transfer through the thin polycarbonate is sufficiently rapid that the contents reach thermal equilibrium with the water in less than 15 seconds. Complete PCR amplification runs of 40 three-step cycles have been performed in as little as 14.5 minutes, with the results showing substantially enhanced specificity compared to conventional technology requiring run times in excess of 100 minutes. The plate clamping station is designed to be amenable to robotic loading and unloading of the system. It includes a heated lid, thus eliminating the need for mineral oil overlay of the reactants. The present system includes three or more plate holder stations, fed from common reservoirs but operating with independent switching cycles. The system can be modularly expanded. 13 figs.

  20. High-speed thermal cycling system and method of use

    DOEpatents

    Hansen, Anthony D. A.; Jaklevic, Joseph M.

    1996-01-01

    A thermal cycling system and method of use are described. The thermal cycling system is based on the circulation of temperature-controlled water directly to the underside of thin-walled polycarbonate microtiter plates. The water flow is selected from a manifold fed by pumps from heated reservoirs. The plate wells are loaded with typically 15-20 µl of reagent mix for the PCR process. Heat transfer through the thin polycarbonate is sufficiently rapid that the contents reach thermal equilibrium with the water in less than 15 seconds. Complete PCR amplification runs of 40 three-step cycles have been performed in as little as 14.5 minutes, with the results showing substantially enhanced specificity compared to conventional technology requiring run times in excess of 100 minutes. The plate clamping station is designed to be amenable to robotic loading and unloading of the system. It includes a heated lid, thus eliminating the need for mineral oil overlay of the reactants. The present system includes three or more plate holder stations, fed from common reservoirs but operating with independent switching cycles. The system can be modularly expanded.
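The quoted throughput is easy to put in per-cycle terms. A quick check of the numbers in the abstract, with Python used only as a calculator:

```python
# Figures quoted in the abstract.
cycles = 40
fast_run_min = 14.5          # this water-circulation system
conventional_min = 100.0     # conventional technology (quoted lower bound)

per_cycle_s = fast_run_min * 60 / cycles            # seconds per 3-step cycle
conventional_per_cycle_s = conventional_min * 60 / cycles
speedup = conventional_min / fast_run_min
```

That is about 21.75 s per three-step cycle versus 150 s or more conventionally, a roughly 7-fold speed-up, consistent with the claim that the well contents equilibrate with the water in under 15 seconds.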

  1. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.

    PubMed

    Barnhoorn, Jonathan S; Haasnoot, Erwin; Bocanegra, Bruno R; van Steenbergen, Henk

    2015-12-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that QRTEngine can be used as a tool for conducting online behavioral research even when this requires accurate stimulus presentation times.

  2. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
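The baseline the paper compares against, the second-order trapezoidal method with a nonlinear solve per step, can be sketched in a few lines. A scalar sketch on a stiff linear test problem, with the per-step nonlinear equation solved by Newton's method (the paper's dislocation-dynamics systems are large and the solvers far more elaborate):

```python
def trapezoidal_step(f, dfdy, y_n, h, tol=1e-12, max_iter=50):
    """One step of the implicit (second-order) trapezoidal method,
        y_{n+1} = y_n + h/2 * (f(y_n) + f(y_{n+1})),
    with the scalar nonlinear equation solved by Newton's method."""
    fn = f(y_n)
    y = y_n + h * fn                        # explicit Euler predictor
    for _ in range(max_iter):
        g = y - y_n - 0.5 * h * (fn + f(y))     # residual g(y) = 0
        dg = 1.0 - 0.5 * h * dfdy(y)            # g'(y)
        delta = g / dg
        y -= delta
        if abs(delta) < tol:
            break
    return y

# Stiff linear test problem y' = -50 y, y(0) = 1, integrated to t = 1.
lam = -50.0
f = lambda y: lam * y
dfdy = lambda y: lam

h, y, t = 0.01, 1.0, 0.0
while t < 1.0 - 1e-12:
    y = trapezoidal_step(f, dfdy, y, h)
    t += h
# Per-step amplification is (1 + h*lam/2) / (1 - h*lam/2) = 0.6, so the
# numerical solution decays monotonically toward the exact e^(lam*t).
```

Replacing the Newton solve with plain fixed-point iteration on y = y_n + h/2 (f(y_n) + f(y)) illustrates the paper's trade-off: for stiff systems the fixed-point map contracts slowly or not at all, which is why accelerated fixed point and Newton's method pay off.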

  3. Mean platelet volume (MPV) predicts middle distance running performance.

    PubMed

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running performance. 
The significant association between baseline MPV and running time suggests that hyperactive platelets may exert some pleiotropic effects on endurance performance.

  4. Access and use of the GUDMAP database of genitourinary development.

    PubMed

    Davies, Jamie A; Little, Melissa H; Aronow, Bruce; Armstrong, Jane; Brennan, Jane; Lloyd-MacGilp, Sue; Armit, Chris; Harding, Simon; Piu, Xinjun; Roochun, Yogmatee; Haggarty, Bernard; Houghton, Derek; Davidson, Duncan; Baldock, Richard

    2012-01-01

    The Genitourinary Development Molecular Atlas Project (GUDMAP) aims to document gene expression across time and space in the developing urogenital system of the mouse, and to provide access to a variety of relevant practical and educational resources. Data come from microarray gene expression profiling (from laser-dissected and FACS-sorted samples) and in situ hybridization at both low (whole-mount) and high (section) resolutions. Data are annotated to a published, high-resolution anatomical ontology and can be accessed using a variety of search interfaces. Here, we explain how to run typical queries on the database, by gene or anatomical location, how to view data, how to perform complex queries, and how to submit data.

  5. Computations on Wings With Full-Span Oscillating Control Surfaces Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2013-01-01

    A dual-level parallel procedure is presented for computing large databases to support aerospace vehicle design. This procedure has been developed as a single Unix script within the Parallel Batch Submission environment, utilizing MPIexec to run MPI-based analysis software. It has been developed to provide a process for aerospace designers to generate data for large numbers of cases with the highest possible fidelity and reasonable wall clock time. A single job submission environment has been created to avoid keeping track of multiple jobs and the associated system administration overhead. The process has been demonstrated for computing large databases for the design of typical aerospace configurations, a launch vehicle and a rotorcraft.

  6. Ligand-protein docking using a quantum stochastic tunneling optimization method.

    PubMed

    Mancera, Ricardo L; Källblad, Per; Todorov, Nikolay P

    2004-04-30

    A novel hybrid optimization method called quantum stochastic tunneling has been recently introduced. Here, we report its implementation within a new docking program called EasyDock and a validation with the CCDC/Astex data set of ligand-protein complexes, using the PLP score to represent the ligand-protein potential energy surface and ScreenScore to score the ligand-protein binding energies. When taking the top energy-ranked ligand binding mode pose, we were able to predict the correct crystallographic ligand binding mode in up to 75% of the cases. By using this novel optimization method, run times for typical docking simulations are significantly shortened. Copyright 2004 Wiley Periodicals, Inc. J Comput Chem 25: 858-864, 2004
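The abstract does not detail the algorithm, but for orientation, plain stochastic tunneling (the family the quantum hybrid builds on) transforms the energy surface so that all barriers above the best minimum found so far are flattened, letting the walker escape local minima. A self-contained 1-D sketch with a made-up double-well "energy" function; the paper's quantum variant and its docking-specific moves are not shown:

```python
import math
import random

def stochastic_tunneling(f, x0, gamma=1.0, step=0.5, beta=3.0,
                         n_iter=20000, seed=7):
    """Plain stochastic tunneling on a 1-D function: Metropolis sampling on
    the transformed surface f_stun = 1 - exp(-gamma * (f(x) - f_best)),
    which caps every barrier above the best value found so far at 1."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    x_best, f_best = x, fx
    for _ in range(n_iter):
        x_new = x + rng.uniform(-step, step)
        f_new = f(x_new)
        if f_new < f_best:
            x_best, f_best = x_new, f_new       # new global best so far
        # Transformed energies, measured relative to the running best.
        e_old = 1.0 - math.exp(-gamma * (fx - f_best))
        e_new = 1.0 - math.exp(-gamma * (f_new - f_best))
        if e_new <= e_old or rng.random() < math.exp(-beta * (e_new - e_old)):
            x, fx = x_new, f_new
    return x_best, f_best

# Made-up double-well "energy": local minimum near x = +0.96,
# global minimum near x = -1.03.
energy = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_best, f_best = stochastic_tunneling(energy, x0=1.0)  # start in the wrong well
```

Started in the shallow well, the walker still locates the deeper one, because once the transformed barrier height saturates near 1 the Metropolis acceptance no longer collapses with barrier depth.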

  7. Performance of alumina-supported Pt catalysts in an electron-beam-sustained CO2 laser amplifier

    NASA Technical Reports Server (NTRS)

    Cunningham, D. L.; Jones, P. L.; Miyake, C. I.; Moody, S. E.

    1990-01-01

    The performance of an alumina-supported Pt catalyst system used to maintain the gas purity in an electron-beam-sustained (636) isotope CO2 laser amplifier has been tested. The system characteristics using the two-zone, parallel flow reactor were determined for both continuous- and end-of-day reactor operation using on-line mass spectrometric sampling. The laser amplifier was run with an energy loading of typically 110 J-l/atm and an electron-beam current of 4 mA/sq cm. With these conditions and a pulse repetition frequency of 10 Hz for up to 10,000 shots, increases on the order of 100 ppm O2 were observed with the purifier on and 150 ppm with it off. The 1/e recovery time was found to be approximately 75 minutes.

  8. From bed topography to ice thickness: GlaRe, a GIS tool to reconstruct the surface of palaeoglaciers

    NASA Astrophysics Data System (ADS)

    Pellitero, Ramon; Rea, Brice; Spagnolo, Matteo; Bakke, Jostein; Ivy-Ochs, Susan; Frew, Craig; Hughes, Philip; Ribolini, Adriano; Renssen, Hans; Lukas, Sven

    2016-04-01

    We present GlaRe, a GIS tool that automatically reconstructs the 3D geometry of palaeoglaciers given the bed topography. The tool utilises a numerical approach and can work from a minimum of morphological evidence, i.e. the position of the palaeoglacier front. The numerical approach is based on an iterative solution to the perfect-plasticity assumption for ice rheology, explained in Benn and Hulton (2010). The tool can be run in ArcGIS 10.1 (ArcInfo license) and later updates, and the toolset is written in Python. The GlaRe toolbox presented in this paper implements a well-established approach for the determination of palaeoglacier equilibrium profiles. Significantly, it permits users to quickly run multiple glacier reconstructions, which were previously very laborious and time consuming (typically days for a single valley glacier). The implementation of GlaRe will facilitate the reconstruction of large numbers of palaeoglaciers, which will provide opportunities for research addressing at least two fundamental problems: 1. Investigation of the dynamics of palaeoglaciers. Glacier reconstructions are often based on a rigorous interpretation of glacial landforms, but sufficient attention and/or time has not always been given to the actual reconstruction of the glacier surface, which is crucial for the calculation of palaeoglacier ELAs and the subsequent derivation of quantitative palaeoclimatic data. 2. The ability to run large numbers of reconstructions over much larger spatial areas provides an opportunity to undertake palaeoglacier reconstructions across entire mountain ranges, regions or even continents, allowing climatic gradients and atmospheric circulation patterns to be elucidated. The tool's performance has been evaluated by comparing two extant glaciers, an icefield and a cirque/valley glacier, for which the subglacial topography is known, with basic reconstructions using GlaRe. 
Results from the comparisons between the extant glacier surfaces and the modelled ones show very similar ELA values, with errors on the order of 10-20 m (equivalent to a 0.065-0.13 K variation assuming a typical -6.5 K km⁻¹ altitudinal gradient), and these can be improved further by increasing the number of flowlines and using F factors where needed. GlaRe is able to quickly generate robust palaeoglacier surfaces from the very limited inputs often available from the geomorphological record.
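The perfect-plasticity iteration at the heart of this approach fits in a short function. A sketch in the spirit of Benn and Hulton (2010), not GlaRe's actual code: march up-glacier from the front, at each node solving the surface-slope relation dh/dx = τ / (ρ g H) with the thickness H averaged over the step by damped fixed-point iteration. The yield stress of 100 kPa, the bed profile, and the omission of shape (F) factors are assumptions for illustration:

```python
RHO, G = 900.0, 9.81   # ice density (kg m^-3) and gravity (m s^-2)

def reconstruct_surface(bed, dx, tau=100e3, n_inner=30):
    """Reconstruct a perfect-plasticity glacier surface along a flowline.
    `bed` holds bed elevations (m) ordered from the front upstream, `dx`
    is the node spacing (m), `tau` an assumed yield stress (Pa). Starts
    from zero thickness at the front; no shape factors (a simplification
    relative to GlaRe)."""
    h = [bed[0]]                                   # ice-free front
    for i in range(1, len(bed)):
        H_prev = max(h[-1] - bed[i - 1], 1.0)      # floor avoids divide-by-zero
        h_next = h[-1] + dx * tau / (RHO * G * H_prev)
        for _ in range(n_inner):
            H_mean = 0.5 * (H_prev + max(h_next - bed[i], 1.0))
            h_new = h[-1] + dx * tau / (RHO * G * H_mean)
            h_next = 0.5 * (h_next + h_new)        # damping for convergence
        h.append(max(h_next, bed[i]))
    return h

# Flat bed, 100 m node spacing: thickness should grow roughly as sqrt(x),
# matching the analytic perfect-plasticity profile H = sqrt(2 tau x / (rho g)).
surface = reconstruct_surface([0.0] * 21, dx=100.0)
```

On a flat bed the numerical profile tracks the analytic square-root solution to within a metre or two over 2 km, which is the kind of internal check that precedes comparison against extant glaciers.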

  9. MerMade: An Oligodeoxyribonucleotide Synthesizer for High Throughput Oligonucleotide Production in Dual 96-Well Plates

    PubMed Central

    Rayner, Simon; Brignac, Stafford; Bumeister, Ron; Belosludtsev, Yuri; Ward, Travis; Grant, O’dell; O’Brien, Kevin; Evans, Glen A.; Garner, Harold R.

    1998-01-01

    We have designed and constructed a machine that synthesizes two standard 96-well plates of oligonucleotides in a single run using standard phosphoramidite chemistry. The machine is capable of making a combination of standard, degenerate, or modified oligos in a single plate. The run time is typically 17 hr for two plates of 20-mers and a reaction scale of 40 nmol. The reaction vessel is a standard polypropylene 96-well plate with a hole drilled in the bottom of each well. The two plates are placed in separate vacuum chucks and mounted on an xy table. Each well in turn is positioned under the appropriate reagent injection line and the reagent is injected by switching a dedicated valve. All aspects of machine operation are controlled by a Macintosh computer, which also guides the user through the startup and shutdown procedures, provides a continuous update on the status of the run, and facilitates a number of service procedures that need to be carried out periodically. Over 25,000 oligos have been synthesized for use in dye terminator sequencing reactions, polymerase chain reactions (PCRs), hybridization, and RT–PCR. Oligos up to 100 bases in length have been made with a coupling efficiency in excess of 99%. These machines, working in conjunction with our oligo prediction code, are particularly well suited to application in automated high throughput genomic sequencing. PMID:9685322

  10. MerMade: an oligodeoxyribonucleotide synthesizer for high throughput oligonucleotide production in dual 96-well plates.

    PubMed

    Rayner, S; Brignac, S; Bumeister, R; Belosludtsev, Y; Ward, T; Grant, O; O'Brien, K; Evans, G A; Garner, H R

    1998-07-01

    We have designed and constructed a machine that synthesizes two standard 96-well plates of oligonucleotides in a single run using standard phosphoramidite chemistry. The machine is capable of making a combination of standard, degenerate, or modified oligos in a single plate. The run time is typically 17 hr for two plates of 20-mers and a reaction scale of 40 nmol. The reaction vessel is a standard polypropylene 96-well plate with a hole drilled in the bottom of each well. The two plates are placed in separate vacuum chucks and mounted on an xy table. Each well in turn is positioned under the appropriate reagent injection line and the reagent is injected by switching a dedicated valve. All aspects of machine operation are controlled by a Macintosh computer, which also guides the user through the startup and shutdown procedures, provides a continuous update on the status of the run, and facilitates a number of service procedures that need to be carried out periodically. Over 25,000 oligos have been synthesized for use in dye terminator sequencing reactions, polymerase chain reactions (PCRs), hybridization, and RT-PCR. Oligos up to 100 bases in length have been made with a coupling efficiency in excess of 99%. These machines, working in conjunction with our oligo prediction code, are particularly well suited to application in automated high throughput genomic sequencing.

  11. Plantar loading changes with alterations in foot strike patterns during a single session in habitual rear foot strike female runners.

    PubMed

    Kernozek, Thomas W; Vannatta, Charles N; Gheidi, Naghmeh; Kraus, Sydnie; Aminaka, Naoko

    2016-03-01

    Characterize plantar loading parameters when habitually rear foot strike (RFS) runners change their pattern to a non-rear foot strike (NRFS). Experimental. University biomechanics laboratory. Twenty-three healthy female runners (Age: 22.17 ± 1.64 yrs; Height: 168.91 ± 5.46 cm; Mass: 64.29 ± 7.11 kg). Plantar loading was measured using an in-sole pressure sensor while running down a 20-m runway at a speed restricted to a range of 3.52-3.89 m/s under two conditions: the runner's typical RFS and an adapted NRFS pattern. Repeated measures multivariate analysis of variance was performed to detect differences in loading between these two conditions. Force and pressure variables were greater in the forefoot and phalanx regions with NRFS and greater in the heel and midfoot with the RFS pattern, but the total force imposed upon the whole foot and the contact time remained similar between conditions. Total peak pressure was higher and contact area was lower during NRFS running. The primary finding of this investigation is that plantar loads differ distinctly when changing from an RFS to an NRFS pattern during running. Thus, during a transition from an RFS to an NRFS pattern, a period of acclimation should be considered to allow for adaptation to the novel loads incurred on the plantar regions of the foot. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Putting on a clinic in Va. Carilion, a not-for-profit hospital system based in Roanoke, is taking a $100 million risk to become a physician-run venture.

    PubMed

    Evans, Melanie

    2006-06-26

    Carilion Health System needs to change or die, according to its leaders, so the Roanoke, Va., organization is converting from a typical not-for-profit system into a physician-run clinic. The switch is an extreme version of an industrywide push to employ doctors. James Thweatt Jr., left, of rival Lewis-Gale, says his hospital joined the trend when it hired 80 specialists from a failing local clinic.

  13. Modifications to a Laboratory-Scale Confined Laser Ignition Chamber for Pressure Measurements to 70 MPa

    DTIC Science & Technology

    2017-09-01

    which is then turned over and pressed by hand against a flat surface. The solder ring is removed from the flange using a flat blade mini screwdriver...densities of M10 propellant. This series of experiments was conducted to get an indication of how many experiments could be run with the same window...While we were able to run 6 experiments with a pair of solder rings as the window seat, we typically replace the seat after 4 experiments have been

  14. A new version of a computer program for dynamical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej; Skrobas, Kazimierz

    2006-01-01

    We present a new version of the RHEED program which contains a graphical user interface enabling use of the program in a graphical environment. The program also contains a graphical component which displays program data at run-time through an easy-to-use graphical interface.

    New version program summary
    Title of program: RHEEDGr
    Catalogue identifier: ADWV
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWV
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADUY
    Authors of the original program: A. Daniluk
    Does the new version supersede the original program: no
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Borland C++ Builder
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Number of lines in distributed program, including test data, etc.: 5797
    Number of bytes in distributed program, including test data, etc.: 588 121
    Distribution format: tar.gz
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying the growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition.
    Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface.
    Summary of revisions: In its present form the code is an object-oriented extension of the previous version [2]. Fig. 1 shows the static structure of the classes and their possible relationships (i.e. inheritance, association, aggregation and dependency) in the code. The code has been modified and optimized to compile under the C++ Builder integrated development environment (IDE). A graphical user interface (GUI) for the program has been created. The application is a standard multiple document interface (MDI) project from Builder's object repository; the MDI application spawns child windows that reside within the client window, and the main form contains the child objects. We have added an original graphical component [3] which has been tested successfully in the C++ Builder programming environment under the Microsoft Windows platform. Fig. 2 shows the internal structure of the component. This diagram is a graphic presentation of the static view, showing a collection of declarative model elements, such as classes, types, and their relationships. Each of the model elements shown in Fig. 2 is manifested by one header file, Graph2D.h, and one code file, Graph2D.cpp. Fig. 3 sets the stage by showing the package which supplies the C++ Builder elements used in the component. Installation instructions for the TGraph2D.bpk package can be found in the new distribution. The program has been constructed according to the systems development life cycle (SDLC) methodology [4].
    Typical running time: Machine and user-parameter dependent.
    Unusual features of the program: The program is distributed in the form of a main project, RHEEDGr.bpr, with associated files, and should be compiled using Borland C++ Builder compilers, version 5 or later.

  15. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object, so the metadata cannot be accessed separately from the pixel data. For use cases that access only metadata, the current DICOM object format increases running time. Tag morphing is one such use case: it involves the deletion, insertion or manipulation of one or more metadata attributes, and is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
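The speed-up claimed above follows from a simple observation: when metadata lives apart from the bulk pixel data, a tag-morphing pass never has to read or rewrite the pixels. A minimal sketch in plain Python (toy dictionaries, not the DICOM or MSD formats; all names here are illustrative, not from the paper):

```python
# Toy model of the metadata/pixel-data split, in plain Python dictionaries.
# These structures and function names are illustrative only; they are not
# the DICOM or Multi-Series DICOM (MSD) formats.

def make_study(n_pixels=1_000_000):
    metadata = {"PatientID": "12345", "IssuerOfPatientID": "HOSP-A"}
    pixels = bytearray(n_pixels)  # stands in for bulk pixel data
    return metadata, pixels

def morph_monolithic(metadata, pixels, updates):
    # Monolithic layout: the whole object, pixels included, is copied
    # and rewritten just to change a few small attributes.
    obj = {"meta": dict(metadata), "pixels": bytes(pixels)}
    obj["meta"].update(updates)
    return obj

def morph_split(metadata, updates):
    # Split layout: only the small metadata component is touched;
    # the pixel data is never read.
    morphed = dict(metadata)
    morphed.update(updates)
    return morphed
```

Timing `morph_monolithic` against `morph_split` for growing pixel sizes shows that the split path's cost is independent of the pixel payload, which is the essence of the argument for the MSD format.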

  16. Humanoid Robotics: Real-Time Object Oriented Programming

    NASA Technical Reports Server (NTRS)

    Newton, Jason E.

    2005-01-01

    Programming of robots in today's world is often done in a procedural oriented fashion, where object oriented programming is not incorporated. In order to keep a robust architecture allowing for easy expansion of capabilities and a truly modular design, object oriented programming is required. However, concepts in object oriented programming are not typically applied to a real time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework abstracting control of the robot into simple logical commands in a real time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  17. Key technology research of HILS based on real-time operating system

    NASA Astrophysics Data System (ADS)

    Wang, Fankai; Lu, Huiming; Liu, Che

    2018-03-01

    To address the long development cycle of traditional simulation and the lack of real-time capability in purely digital simulation, this paper presents a HILS (Hardware In the Loop Simulation) system based on the real-time operating platform xPC. The system solves the communication problem between the HMI and Simulink models through the MATLAB engine interface, and implements system setup, offline simulation, and model compiling and downloading. The xPC application interface, together with the integrated TeeChart ActiveX chart component, provides monitoring of the real-time target application. Each functional block in the system is encapsulated as a DLL, and data interaction between modules is realized with MySQL database technology. When the HILS system runs, it locates the address of the online xPC target by means of the Ping command and establishes TCP/IP communication between the two machines. The technical effectiveness of the developed system is verified on a typical power station control system.

  18. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    PubMed

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows, typically induced by flood runoff, cause numerous environmental impacts such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced in a stratified reservoir, using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality may be negligible.

  19. The effect of changes in health sector resources on infant mortality in the short-run and the long-run: a longitudinal econometric analysis.

    PubMed

    Farahani, Mansour; Subramanian, S V; Canning, David

    2009-06-01

    While countries with higher levels of human resources for health typically have better population health, the evidence that increases in the level of human resources for health lead to improvements in population health is limited. We use a dynamic regression model to obtain estimates of both the short-run and long-run effects of changes in physicians per capita, our measure of health system resources, on infant mortality. Using a dataset of 99 countries at 5-year intervals from 1960 to 2000, we estimate that increasing the number of physicians by one per 1000 population (roughly a doubling of current levels of provision) decreases the infant mortality rate by 15% within 5 years and by 45% in the long run, with half the long-run gain being achieved in 15 years. We conclude that the long-run effects of health system resources are substantially larger than previously estimated. Our results suggest, however, that countries that have delayed action on the Millennium Development Goal of reducing the infant and child mortality rate by two-thirds by 2015 (relative to 1990) may have difficulty meeting this goal even if they rapidly increase resources now.
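The distinction between short-run and long-run effects in a dynamic regression can be illustrated with a toy partial-adjustment model; the coefficients below are illustrative numbers, not the study's estimated specification:

```python
# Toy partial-adjustment model  y_t = rho * y_{t-1} + beta * x_t.
# The coefficients used in the example are illustrative and are not
# the estimates from the study summarized above.

def cumulative_effect(beta, rho, k):
    """Cumulative response of y after k periods of a sustained unit change in x."""
    return beta * (1 - rho**k) / (1 - rho)

def long_run_effect(beta, rho):
    """Limit of the cumulative response as k grows."""
    return beta / (1 - rho)
```

In such a model a permanent unit change in x has immediate effect beta, but the lagged dependent variable carries it forward, so the effect accumulates toward beta/(1-rho). For instance, with beta = -0.15 and rho = 0.6 the short-run effect is -0.15 while the long-run effect is -0.375, two and a half times larger.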

  20. Older Runners Retain Youthful Running Economy despite Biomechanical Differences.

    PubMed

    Beck, Owen N; Kipp, Shalaya; Roby, Jaclyn M; Grabowski, Alena M; Kram, Rodger; Ortega, Justus D

    2016-04-01

    Sixty-five years of age typically marks the onset of impaired walking economy. However, running economy has not been assessed beyond the age of 65 yr. Furthermore, a critical determinant of running economy is the spring-like storage and return of elastic energy from the leg during stance, which is related to leg stiffness. Therefore, we investigated whether runners older than 65 yr retain youthful running economy and/or leg stiffness across running speeds. Fifteen young and 15 older runners ran on a force-instrumented treadmill at 2.01, 2.46, and 2.91 m·s⁻¹. We measured their rates of metabolic energy consumption (i.e., metabolic power), ground reaction forces, and stride kinematics. There were only small differences in running economy between young and older runners across the range of speeds. Statistically, the older runners consumed 2% to 9% less metabolic energy than the young runners across speeds (P = 0.012). Also, the leg stiffness of older runners was 10% to 20% lower than that of young runners across the range of speeds (P = 0.002), and in contrast to the younger runners, the leg stiffness of older runners decreased with speed (P < 0.001). Runners beyond 65 yr of age maintain youthful running economy despite biomechanical differences. It may be that vigorous exercise, such as running, prevents the age-related deterioration of muscular efficiency and, therefore, may make everyday activities easier.

  1. ACCURACY OF SELF-REPORTED FOOT STRIKE PATTERN IN INTERCOLLEGIATE AND RECREATIONAL RUNNERS DURING SHOD RUNNING

    PubMed Central

    Bade, Michael B.; Aaron, Katie

    2016-01-01

    ABSTRACT Background Clinicians are interested in the foot strike pattern (FSP) in runners because of the suggested relationship between the strike pattern and lower extremity injury. Purpose The purpose of this study was to assess the ability of collegiate cross-country runners and recreational runners to self-report their foot strike pattern during running. Study Design Cross-sectional study. Methods Twenty-three collegiate cross-country and 23 recreational runners voluntarily consented to participate. Inclusion criteria included running at least 18 miles per week, experience running on a treadmill, and no history of lower extremity congenital or traumatic deformity, or of acute injury in the three months prior to the start of the study. All participants completed a pre-test survey to indicate their typical foot strike pattern during a training run (FSPSurvey). Prior to running, reflective markers were placed on the posterior midsole and the vamp of the running shoe. A high-speed camera was used to film each runner in standing and while running at his or her preferred speed on a treadmill. The angle between the vector formed by the two reflective markers and the superior surface of the treadmill was used to calculate the foot strike angle (FSA). To determine the foot strike pattern from the video data (FSPVideo), the static standing angle was subtracted from the FSA at initial contact of the shoe on the treadmill. In addition to descriptive statistics, percent agreement and chi-square analysis were used to determine distribution differences between the video analysis results and the survey. Results The distributions of the FSPSurvey and the FSPVideo were significantly different for both the XC Runners (p < .01; chi-square = 8.77) and the REC Runners (p < .0002; chi-square = 16.70). The cross-country and recreational runners correctly self-identified their foot strike pattern 56.5% and 43.5% of the time, respectively. 
Conclusion The findings of this study suggest that the clinician cannot depend on an experienced runner to correctly self-identify their FSP. Clinicians interested in knowing the FSP of a runner should consider performing the two-dimensional video analysis described in this paper. Level of Evidence 3 PMID:27274421
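The two-marker foot strike angle calculation described in the Methods above can be sketched as follows. The sign convention (positive angle = forefoot marker above rearfoot marker at contact) and the 1-degree midfoot band are illustrative assumptions, not thresholds taken from the record:

```python
import math

# Sketch of a two-marker foot strike angle (FSA) computation: the angle
# between the rear-marker -> fore-marker shoe vector and the horizontal
# treadmill surface, corrected by the static standing angle. The sign
# convention and the classification band are illustrative assumptions.

def foot_strike_angle(rear_xy, fore_xy):
    """Angle in degrees between the rear->fore marker vector and horizontal."""
    dx = fore_xy[0] - rear_xy[0]
    dy = fore_xy[1] - rear_xy[1]
    return math.degrees(math.atan2(dy, dx))

def classify_fsp(contact_angle, standing_angle, band=1.0):
    """Classify foot strike pattern from the standing-offset-corrected
    angle at initial contact."""
    fsa = contact_angle - standing_angle
    if fsa > band:
        return "rearfoot"   # toes up at contact
    if fsa < -band:
        return "forefoot"   # toes down at contact
    return "midfoot"
```

Subtracting the standing angle, as the authors describe, removes each shoe's built-in marker offset so that only the dynamic orientation at contact is classified.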

  2. Variation in Foot Strike Patterns during Running among Habitually Barefoot Populations

    PubMed Central

    Hatala, Kevin G.; Dingwall, Heather L.; Wunderlich, Roshna E.; Richmond, Brian G.

    2013-01-01

    Endurance running may have a long evolutionary history in the hominin clade but it was not until very recently that humans ran wearing shoes. Research on modern habitually unshod runners has suggested that they utilize a different biomechanical strategy than runners who wear shoes, namely that barefoot runners typically use a forefoot strike in order to avoid generating the high impact forces that would be experienced if they were to strike the ground with their heels first. This finding suggests that our habitually unshod ancestors may have run in a similar way. However, this research was conducted on a single population and we know little about variation in running form among habitually barefoot people, including the effects of running speed, which has been shown to affect strike patterns in shod runners. Here, we present the results of our investigation into the selection of running foot strike patterns among another modern habitually unshod group, the Daasanach of northern Kenya. Data were collected from 38 consenting adults as they ran along a trackway with a plantar pressure pad placed midway along its length. Subjects ran at self-selected endurance running and sprinting speeds. Our data support the hypothesis that a forefoot strike reduces the magnitude of impact loading, but the majority of subjects instead used a rearfoot strike at endurance running speeds. Their percentages of midfoot and forefoot strikes increased significantly with speed. These results indicate that not all habitually barefoot people prefer running with a forefoot strike, and suggest that other factors such as running speed, training level, substrate mechanical properties, running distance, and running frequency, influence the selection of foot strike patterns. PMID:23326341

  3. Variation in foot strike patterns during running among habitually barefoot populations.

    PubMed

    Hatala, Kevin G; Dingwall, Heather L; Wunderlich, Roshna E; Richmond, Brian G

    2013-01-01

    Endurance running may have a long evolutionary history in the hominin clade but it was not until very recently that humans ran wearing shoes. Research on modern habitually unshod runners has suggested that they utilize a different biomechanical strategy than runners who wear shoes, namely that barefoot runners typically use a forefoot strike in order to avoid generating the high impact forces that would be experienced if they were to strike the ground with their heels first. This finding suggests that our habitually unshod ancestors may have run in a similar way. However, this research was conducted on a single population and we know little about variation in running form among habitually barefoot people, including the effects of running speed, which has been shown to affect strike patterns in shod runners. Here, we present the results of our investigation into the selection of running foot strike patterns among another modern habitually unshod group, the Daasanach of northern Kenya. Data were collected from 38 consenting adults as they ran along a trackway with a plantar pressure pad placed midway along its length. Subjects ran at self-selected endurance running and sprinting speeds. Our data support the hypothesis that a forefoot strike reduces the magnitude of impact loading, but the majority of subjects instead used a rearfoot strike at endurance running speeds. Their percentages of midfoot and forefoot strikes increased significantly with speed. These results indicate that not all habitually barefoot people prefer running with a forefoot strike, and suggest that other factors such as running speed, training level, substrate mechanical properties, running distance, and running frequency, influence the selection of foot strike patterns.

  4. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    PubMed

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests a decreased load on the lower limbs by placing the shoe cleat more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.

  5. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run and tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is commonly described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic run time distribution is not exponential but a heavy-tailed power-law decay, at odds with the classical exponential description. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model in which bacterial dwelling times on the surfaces are related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is used. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
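The contrast between the classical exponential run-time distribution and a heavy-tailed power law can be made concrete with a small sampling sketch; the parameter values (mean run time, tail exponent alpha) are illustrative, not the paper's fitted values:

```python
import random

# Compare exponential vs. power-law (Pareto) run-time distributions with
# equal mean. The mean and tail exponent are illustrative choices.

def sample_runs(n, mean=1.0, alpha=1.5, seed=1):
    rng = random.Random(seed)
    exp_runs = [rng.expovariate(1.0 / mean) for _ in range(n)]
    # Pareto(xm, alpha) has mean xm * alpha / (alpha - 1); choose xm so
    # both distributions share the same mean run time.
    xm = mean * (alpha - 1) / alpha
    pareto_runs = [xm * rng.paretovariate(alpha) for _ in range(n)]
    return exp_runs, pareto_runs

def tail_fraction(samples, cutoff):
    """Fraction of runs longer than `cutoff`."""
    return sum(s > cutoff for s in samples) / len(samples)
```

Even at equal mean run time, the power-law ensemble contains rare runs tens or hundreds of times longer than anything the exponential produces, and it is these long runs (and the correspondingly long surface dwelling times) that can dominate macroscopic transport such as the upstream contamination tongues described above.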

  6. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, Richard

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  7. Tandem steerable running gear

    NASA Technical Reports Server (NTRS)

    Fincannon, O. J.; Glenn, D. L.

    1972-01-01

    Characteristics of steering assembly for vehicle designed to move large components of space flight vehicles are presented. Design makes it possible to move heavy and bulky items through narrow passageways with tight turns. Typical configuration is illustrated to show dimensions of turning radius and minimum distances involved.

  8. Optimal chemotaxis in intermittent migration of animal cells

    NASA Astrophysics Data System (ADS)

    Romanczuk, P.; Salbreux, G.

    2015-04-01

    Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases, and performing persistent migration during run phases. We show that the chaser average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
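The run-and-tumble chase described above can be sketched as a simple 2-D simulation; all parameter values (angular noise, tumble duration, capture radius, target position) are illustrative choices, not values from the paper:

```python
import math
import random

# Minimal 2-D run-and-tumble chaser: during each tumble it reorients
# toward the target up to Gaussian angular noise, then runs straight
# for run_time. All parameter values are illustrative.

def catch_time(run_time, tumble_time=0.1, speed=1.0, noise=0.3,
               target=(10.0, 0.0), capture_radius=0.5,
               t_max=1000.0, seed=3):
    rng = random.Random(seed)
    x = y = t = 0.0
    while t < t_max:
        # Tumble phase: reorient toward the target with angular noise.
        heading = math.atan2(target[1] - y, target[0] - x)
        heading += rng.gauss(0.0, noise)
        t += tumble_time
        # Run phase: persistent straight motion, advanced in substeps so
        # a capture in mid-run is not missed.
        steps = max(1, int(run_time / 0.01))
        dt = run_time / steps
        for _ in range(steps):
            x += speed * dt * math.cos(heading)
            y += speed * dt * math.sin(heading)
            t += dt
            if math.hypot(target[0] - x, target[1] - y) < capture_radius:
                return t
    return math.inf  # target not caught within t_max
```

Comparing `catch_time` for very short versus moderate `run_time` shows the cost of spending most of the time tumbling; the full trade-off between tumble overhead and the dispersion caused by long noisy runs is what the analytical results above characterize.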

  9. Serum S100B level increases after running but not cycling exercise.

    PubMed

    Stocchero, Cintia Mussi Alvim; Oses, Jean Pierre; Cunha, Giovani Santos; Martins, Jocelito Bijoldo; Brum, Liz Marina; Zimmer, Eduardo Rigon; Souza, Diogo Onofre; Portela, Luis Valmor; Reischak-Oliveira, Alvaro

    2014-03-01

    The objective of this study was to investigate the effect of running versus cycling exercises upon serum S100B levels and typical markers of skeletal muscle damage such as creatine kinase (CK), aspartate aminotransferase (AST) and myoglobin (Mb). Although recent work demonstrates that S100B is highly expressed and exerts functional properties in skeletal muscle, there is no previous study that tries to establish a relationship between muscle damage and serum S100B levels after exercise. We conducted a cross-sectional study on 13 male triathletes. They completed 2 submaximal exercise protocols at anaerobic threshold intensity. Running was performed on a treadmill with no inclination (RUN) and cycling (CYC) using a cycle-simulator. Three blood samples were taken before (PRE), immediately after (POST) and 1 h after exercise for CK, AST, Mb and S100B assessments. We found a significant increase in serum S100B levels and muscle damage markers in RUN POST compared with RUN PRE. Comparing groups, POST S100B, CK, AST and Mb serum levels were higher in RUN than CYC. Only in RUN, the area under the curve (AUC) of serum S100B is positively correlated with AUC of CK and Mb. Therefore, immediately after an intense exercise such as running, but not cycling, serum levels of S100B protein increase in parallel with levels of CK, AST and Mb. Additionally, the positive correlation between S100B and CK and Mb points to S100B as an acute biomarker of muscle damage after running exercise.

  10. Algorithms for optimization of branching gravity-driven water networks

    NASA Astrophysics Data System (ADS)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
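The backtracking approach over a discrete diameter set can be sketched for a simple series pipeline. The head-loss and cost expressions below are crude placeholders (loss proportional to L/D^5, cost proportional to L*D^1.5), not the hydraulic or cost models used in the study, and the diameter set and constants are illustrative:

```python
# Backtracking sketch: choose one diameter per pipe segment from a
# discrete set so that total head loss stays within the available
# gravity head, at minimum cost. Models and constants are illustrative.

DIAMETERS = [0.05, 0.075, 0.10, 0.15]   # m, illustrative discrete set
K_LOSS = 1e-7                            # lumped hydraulic constant (illustrative)

def head_loss(length, diameter):
    return K_LOSS * length / diameter**5

def cost(length, diameter):
    return length * diameter**1.5

def best_design(lengths, head_budget):
    """Minimum-cost diameter assignment meeting the head-loss budget."""
    best = {"cost": float("inf"), "design": None}

    def recurse(i, loss_so_far, cost_so_far, chosen):
        if cost_so_far >= best["cost"]:
            return  # prune: cannot beat the incumbent solution
        if loss_so_far > head_budget:
            return  # prune: already infeasible (loss only grows)
        if i == len(lengths):
            best["cost"], best["design"] = cost_so_far, list(chosen)
            return
        for d in sorted(DIAMETERS):  # try cheaper (smaller) diameters first
            chosen.append(d)
            recurse(i + 1, loss_so_far + head_loss(lengths[i], d),
                    cost_so_far + cost(lengths[i], d), chosen)
            chosen.pop()

    recurse(0, 0.0, 0.0, [])
    return best["design"], best["cost"]
```

Pruning on both accumulated cost (against the incumbent) and accumulated head loss is what lets backtracking discard infeasible and suboptimal candidates without missing the global optimum, which is the property the comparison above highlights.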

  11. Sagittal plane bending moments acting on the lower leg during running.

    PubMed

    Haris Phuah, Affendi; Schache, Anthony G; Crossley, Kay M; Wrigley, Tim V; Creaby, Mark W

    2010-02-01

    Sagittal bending moments acting on the lower leg during running may play a role in tibial stress fracture development. The purpose of this study was to evaluate these moments at nine equidistant points along the length of the lower leg (10% point-90% point) during running. Kinematic and ground reaction force data were collected for 20 male runners, who each performed 10 running trials. Inverse dynamics and musculoskeletal modelling techniques were used to estimate sagittal bending moments due to reaction forces and muscle contraction. The muscle moment was typically positive during stance, except at the most proximal location (10% point) on the lower leg. The reaction moment was predominantly negative throughout stance and greater in magnitude than the muscle moment. Hence, the net sagittal bending moment acting on the lower leg was principally negative (indicating tensile loads on the posterior tibia). Peak moments typically occurred around mid-stance, and were greater in magnitude at the distal, compared with proximal, lower leg. For example, the peak reaction moment at the most distal point was -9.61 ± 2.07 %Bw.Ht., and -2.73 ± 1.18 %Bw.Ht. at the most proximal point. These data suggest that tensile loads on the posterior tibia are likely to be higher toward the distal end of the bone. This finding may explain the higher incidence of stress fracture in the distal aspect of the tibia, observed by some authors. Stress fracture susceptibility will also be influenced by bone strength and this should also be accounted for in future studies. Copyright 2009 Elsevier B.V. All rights reserved.

  12. Prediction of Rare Transitions in Planetary Atmosphere Dynamics Between Attractors with Different Number of Zonal Jets

    NASA Astrophysics Data System (ADS)

    Bouchet, F.; Laurie, J.; Zaboronski, O.

    2012-12-01

    We describe transitions between attractors with one, two, or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, occurring over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations because they require both very high resolution, in order to simulate the turbulence accurately, and simulations performed over an extremely long time; these conditions are usually not met together in any realistic model. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, reversals of the Earth's magnetic field, atmospheric flows, and so on), and their study through conventional numerical computation is inaccessible. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute, both numerically and theoretically, extremely rare transitions between turbulent attractors. This tool, developed in field theory and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and a new type of numerical algorithm. Such algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude shorter than the typical transition time. We illustrate the power of these tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist; those attractors correspond to turbulent flows dominated by one or more zonal jets similar to midlatitude atmospheric jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. Moreover, we also determine the transition rates, which correspond to transition times several orders of magnitude larger than the typical time determined from the jet structure. We discuss the medium-term generalization of those results to models with more complexity, like primitive equations or GCMs.

  13. Forensic applications of direct analysis in real time (DART) coupled to Q-orbitrap tandem mass spectrometry for the in situ analysis of pigments from paint evidence.

    PubMed

    Chen, Tai-Hung; Wu, Shu-Pao

    2017-08-01

    The accurate examination of paint fragments obtained from an accident, such as those obtained from vehicles involved in a hit-and-run case, is often critical in forensic investigations. However, organic pigments are typically minor components of automotive coatings, which makes discrimination difficult. In this study, direct analysis in real time coupled to Q-orbitrap tandem mass spectrometry (DART-MS) was employed to detect a wide range of common organic pigments in vehicle paints. Twelve common organic pigments used in vehicle paints, in colors such as red, yellow, orange, and purple, were tested, and a database was constructed for future examinations of vehicle paint. Two hit-and-run vehicle accident cases, which occurred in New Taipei City, were investigated by Fourier transform infrared (FTIR) spectroscopy and DART-MS. First, FTIR spectroscopy was employed to study the paint samples as a preliminary screening step. Most of the observed IR peaks were attributed to the binder and extenders present in the paints. The IR peaks corresponding to the organic pigments were weak and overlapped with those corresponding to the resins. On the other hand, DART-MS successfully characterized the organic pigments. DART-MS was found to be excellent for rapidly determining the presence of organic pigments in paint samples without the need for a complicated pre-treatment process or lengthy analysis time. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Run-off regime of the small rivers in mountain landscapes (on an example of the mountain "Mongun-taiga")

    NASA Astrophysics Data System (ADS)

    Pryahina, G.; Zelepukina, E.; Guzel, N.

    2012-04-01

    Calculating the hydrological characteristics of small mountain rivers in glacierized basins is often difficult because standard hydrological observations are absent in remote mountain territories. Field work is the only way to obtain actual information on the water regime of such rivers. The rivers of the Mongun-taiga massif, located at the junction of the Altai and Sayan mountains, were the objects of hydrological research during the complex expeditions of the Russian Geographical Society in 2010-2011. The Mongun-taiga cluster of the international biosphere reserve "Ubsunurskaya hollow" has attracted the interest of geographers for many years. An original landscape map at a scale of 1:100000 was compiled, and hydrological observations were carried out on the East Mugur and Mugur rivers, which belong to the inland basin of Internal Asia. Runoff observations on the East Mugur river were made at a profile at the glacier tongue (the glacierized area is 22% (3.2 km2) of the catchment) and at a closing profile located 3.4 km below the glacier tongue. The following results were obtained. During the ablation period, diurnal fluctuations with strongly pronounced maxima and minima of water discharge are typical for small rivers with a considerable share of glacial feeding. The runoff maximum from the glacier occurs from 2 to 7 p.m., and the runoff minimum is observed in the early morning. The high speed of meltwater runoff from the glacier tongue and the rather small dynamic storage of water on the ice surface lead to rapid growth of water discharge. At the lower profile, the times of maximum and minimum discharge are shifted by about 2 hours on average, depending on the travel time of the water. The maximum glacial runoff discharge (1.12 m3/s) at the upper profile was registered on July 16, a day without rain; the daily runoff volumes at the upper and lower profiles were 60700-67600 m3 that day. The runoff from the non-glacial part of the basin is formed by groundwater and melting snowfields; during periods without rainfall it amounted to about 10% of the runoff at the lower profile. We suggest that this discharge corresponds to the base-flow value at the lower profile, because the area of snowfields in the basin was < 0.1 km2 that year. Runoff monitoring showed that rivers with a small share of glacial feeding are characterized by the absence of a diurnal runoff cycle. During rainfall, the water content of the river increases rapidly owing to the substantial dissection of the basin and, as a result, the fast delivery of rain water into the river bed. The sharp decrease in water content during periods without rainfall indicates low reserves of soil water and groundwater and a low rate of glacial feeding. Thus, glaciers and the character of the relief influence the formation of runoff in small mountain rivers. The results of this research will be used for mathematical modelling of mountain river runoff.

  15. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities.
Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
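    The two dose calculations compared in the study can be illustrated with a short sketch. The 1.02 MET·hours per km conversion is the figure cited in the abstract; the weekly distance, time, and MET intensity below are invented example values, not study data.

```python
# Illustrative arithmetic for the two running-dose measures (METhr/d).

def met_hr_per_day_from_distance(km_per_week):
    """Distance-based dose: 1.02 MET-hours per km run (study's conversion)."""
    return 1.02 * km_per_week / 7.0

def met_hr_per_day_from_time(hours_per_week, met_intensity):
    """Time-and-intensity dose: hours run times the pace's MET value
    (compendium values run roughly 8-12 METs depending on pace)."""
    return met_intensity * hours_per_week / 7.0

# A hypothetical runner covering 40 km in 4 hours per week at an assumed
# 11.5-MET pace:
dose_distance = met_hr_per_day_from_distance(40.0)   # about 5.8 METhr/d
dose_time = met_hr_per_day_from_time(4.0, 11.5)      # about 6.6 METhr/d
```

    As the abstract reports, the two measures generally disagree, so which one a prescription is written in terms of matters for the expected weight-control benefit.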

  16. Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.

    PubMed

    Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M

    2015-05-01

    Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. The hypothesis was that minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. The study design was a randomized crossover trial (level of evidence, 3). Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.

  17. JaxoDraw: A graphical user interface for drawing Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Binosi, D.; Theußl, L.

    2004-08-01

    JaxoDraw is a Feynman graph plotting tool written in Java. It has a complete graphical user interface that allows all actions to be carried out via mouse click-and-drag operations in a WYSIWYG fashion. Graphs may be exported to postscript/EPS format and can be saved in XML files to be used for later sessions. One of JaxoDraw's main features is the possibility to create LaTeX code that may be used to generate graphics output, thus combining the powers of LaTeX with those of a modern-day drawing program. With JaxoDraw it becomes possible to draw even complicated Feynman diagrams with just a few mouse clicks, without knowledge of any programming language. Program summary Title of program: JaxoDraw Catalogue identifier: ADUA Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUA Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar gzip file Operating system: Any Java-enabled platform, tested on Linux, Windows ME, XP, Mac OS X Programming language used: Java License: GPL Nature of problem: Existing methods for drawing Feynman diagrams usually require some 'hard-coding' in one or another programming or scripting language. It is not very convenient, and often time consuming, to generate relatively simple diagrams. Method of solution: A program is provided that allows for the interactive drawing of Feynman diagrams with a graphical user interface. The program is easy to learn and use, produces high-quality output in several formats and runs on any operating system where a Java Runtime Environment is available. Number of bytes in distributed program, including test data: 2 117 863 Number of lines in distributed program, including test data: 60 000 Restrictions: Certain operations (like internal latex compilation, Postscript preview) require the execution of external commands that might not work on untested operating systems.
Typical running time: As an interactive program, the running time depends on the complexity of the diagram to be drawn.

  18. 16 CFR 803.10 - Running of time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Running of time. 803.10 Section 803.10 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 TRANSMITTAL RULES § 803.10 Running of time. (a...

  19. Moving Museum Experiences

    ERIC Educational Resources Information Center

    Weisberg, Shelley Kruger

    2011-01-01

    As Howard Gardner persuasively argued, movement, or kinesthetics, can be a powerful educational tool and one to which some learners are particularly attuned. Museums, however, are typically places that discourage movement (don't run, don't jump, watch out for the artifacts). This makes incorporating kinesthetic learning challenging. This article…

  20. Nutrition behaviors, perceptions, and beliefs of recent marathon finishers.

    PubMed

    Wilson, Patrick B

    2016-09-01

    To describe the nutrition behaviors, perceptions, and beliefs of marathoners. A survey-based study was conducted with 422 recent marathon finishers (199 men, 223 women). Participants reported their running background, demographics, diets followed, supplements used, and food/fluid intake during their most recent marathon (median 7 days prior), as well as beliefs about hydration, fueling, and sources of nutrition information. Median finishing times were 3:53 (3:26-4:35) and 4:25 (3:50-4:59) h:min for men and women during their most recent marathon. Most participants (66.1%) reported typically following a moderate-carbohydrate, moderate-fat diet, while 66.4% carbohydrate-loaded prior to their most recent marathon. Among 139 participants following a specific diet over the past year, the most common were vegetarian/vegan/pescatarian (n = 39), Paleolithic (n = 16), gluten-free (n = 15), and low-carbohydrate (n = 12). Roughly 35% of participants took a supplement intended to improve running performance over the past month. Women were more likely to follow specific diets (39.0% vs. 26.1%), while men were more likely to recently use performance-enhancing supplements (40.2% vs. 30.0%). Most participants (68.3%) indicated they were likely or very likely to rely on a structured plan to determine fluid intake, and 75% were confident in their ability to hydrate. At least 35.6% of participants thought they could improve marathon performance by 8% or more with nutrition interventions. Scientific journals ranked as the most reliable source of nutrition information, while running coaches ranked as the most likely source to be utilized. Findings from this investigation, such as diets and supplements utilized by marathoners, can be used by practitioners and researchers alike to improve the dissemination of scientifically-based information on nutrition and marathon running.

  1. Altered Running Economy Directly Translates to Altered Distance-Running Performance.

    PubMed

    Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger

    2016-11-01

    Our goal was to quantify whether small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing, and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m·s^-1 by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.
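    The reported linear fit lends itself to simple arithmetic. A minimal sketch using only the baseline time and the 0.78% per 100 g slope quoted in the abstract:

```python
# Predicted 3000-m time versus added shoe mass, from the abstract's linear
# fit: time rises 0.78% per added 100 g per shoe, from a 626.1 s baseline.

BASELINE_3000M_S = 626.1     # control-shoe time reported in the abstract (s)
PCT_TIME_PER_100G = 0.78     # % slower per added 100 g per shoe (linear fit)

def predicted_time(added_grams_per_shoe):
    """Predicted 3000-m time (s) after adding mass to each shoe."""
    slowdown = PCT_TIME_PER_100G / 100.0 * added_grams_per_shoe / 100.0
    return BASELINE_3000M_S * (1.0 + slowdown)

t100 = predicted_time(100)   # about 631 s
t300 = predicted_time(300)   # about 641 s
```

    These predictions sit close to the measured averages (0.65% and 2.37% slower), which is the abstract's point: the economy penalty translates proportionally into race time.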

  2. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    PubMed

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.

  3. RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems.

    PubMed

    Rodriguez Arreola, Alberto; Balsamo, Domenico; Merrett, Geoff V; Weddell, Alex S

    2018-01-10

    Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor's state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches.
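    The record-and-replay idea can be sketched abstractly. The class and interface below are invented for illustration (RESTOP itself is C middleware for microcontrollers, not this Python API): every configuration write is journaled so the peripheral can be reconfigured after a power cycle.

```python
# Hypothetical sketch of journaling peripheral configuration writes so they
# can be replayed after a power interruption (illustrative, not RESTOP's API).

class PeripheralProxy:
    def __init__(self, bus_write):
        self._bus_write = bus_write      # e.g., an SPI/I2C register-write function
        self._journal = {}               # register -> last value written

    def write(self, register, value):
        self._journal[register] = value  # keep only the latest value per register
        self._bus_write(register, value)

    def restore(self):
        """Replay journaled writes to reconfigure the peripheral after a power cycle."""
        for register, value in self._journal.items():
            self._bus_write(register, value)

# Usage: journal two config writes, "lose power", then restore.
log = []
proxy = PeripheralProxy(lambda reg, val: log.append((reg, val)))
proxy.write(0x01, 0xA0)
proxy.write(0x02, 0x55)
log.clear()                              # peripheral loses its volatile state
proxy.restore()                          # both registers are re-written
```

    Keeping only the last value written per register is one way to bound the journal's size; the paper's measured overhead comes from maintaining such a record in non-volatile memory at run-time.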

  4. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  6. Long-Term Uptake of Phenol-Water Vapor Follows Similar Sigmoid Kinetics on Prehydrated Organic Matter- and Clay-Rich Soil Sorbents.

    PubMed

    Borisover, Mikhail; Bukhanovsky, Nadezhda; Lado, Marcos

    2017-09-19

    Typical experimental time frames allowed for equilibrating water-organic vapors with soil sorbents may be too short to capture slow chemical reactions that ultimately control the thermodynamically stable state. In this work, long-term gravimetric examination of kinetics covering about 4000 h was performed for phenol-water vapor interacting with four materials pre-equilibrated at three levels of air relative humidity (RHs 52, 73, and 92%). The four contrasting sorbents included an organic matter (OM)-rich peat soil, an OM-poor clay soil, a hydrophilic Aldrich humic acid salt, and water-insoluble leonardite. Monitoring phenol-water vapor interactions with the prehydrated sorbents, as compared with the sorbent samples in a phenol-free atmosphere at the same RH, showed, for the first time, a sigmoid kinetics of phenol-induced mass uptake typical of second-order autocatalytic reactions. The apparent rate constants were similar for all the sorbents, RHs, and phenol activities studied. A significant part of sorbed phenol resisted extraction, which was attributed to its abiotic oxidative coupling. Phenol uptake by the peat and clay soils was also associated with a significant enhancement of water retention. The delayed development of the sigmoidal kinetics in phenol-water uptake demonstrates that long-run abiotic interactions of water-organic vapor with soil may be overlooked when based on short-term examination alone.

  7. FPGA Online Tracking Algorithm for the PANDA Straw Tube Tracker

    NASA Astrophysics Data System (ADS)

    Liang, Yutie; Ye, Hua; Galuska, Martin J.; Gessler, Thomas; Kuhn, Wolfgang; Lange, Jens Soren; Wagner, Milan N.; Liu, Zhen'an; Zhao, Jingzhou

    2017-06-01

    A novel FPGA-based online tracking algorithm for helix track reconstruction in a solenoidal field, developed for the PANDA spectrometer, is described. Employing the Straw Tube Tracker detector with 4636 straw tubes, the algorithm includes a complex track finder and a track fitter. Implemented in VHDL, the algorithm is tested on a Xilinx Virtex-4 FX60 FPGA chip with different types of events, at different event rates. A processing time of 7 μs per event for an average of 6 charged tracks is obtained. The momentum resolution is about 3% (4%) for p_t (p_z) at 1 GeV/c. Compared with the algorithm running on a CPU (single-core Intel Xeon E5520 at 2.26 GHz), an improvement of 3 orders of magnitude in processing time is obtained. The algorithm can handle the severe overlapping of events that is typical for interaction rates above 10 MHz.
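    The underlying helix geometry is standard: in a solenoidal field a charged track projects to a circle in the transverse plane, and the circle's radius fixes the transverse momentum. A sketch of that textbook relation (not the paper's VHDL pipeline; hit coordinates below are invented):

```python
import math

# Standard solenoid-tracking relations: three hits determine a circle, and
# the circle radius R (m) in a field B (T) gives p_T [GeV/c] = 0.3 * B * R.

def circle_radius_from_hits(p1, p2, p3):
    """Circumradius of three (x, y) hits: R = a*b*c / (4 * triangle area)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    return a * b * c / (4.0 * area)

def pt_from_radius(radius_m, b_tesla):
    """Transverse momentum in GeV/c for a track circle of radius R in field B."""
    return 0.3 * b_tesla * radius_m

# Hits on a circle of radius 1 m in a 2 T field -> p_T = 0.6 GeV/c.
r = circle_radius_from_hits((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
pt = pt_from_radius(r, 2.0)
```

    A hardware track finder evaluates relations like these in parallel over many hit triplets, which is where the three-orders-of-magnitude speedup over a sequential CPU comes from.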

  8. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely long run times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  9. A synoptic study of Sudden Phase Anomalies (SPA's) effecting VLF navigation and timing

    NASA Technical Reports Server (NTRS)

    Swanson, E. R.; Kugel, C. P.

    1973-01-01

    Sudden phase anomalies (SPA's) observed on VLF recordings are related to sudden ionospheric disturbances due to solar flares. Results are presented for SPA statistics on 500 events observed in New York during the ten year period 1961 to 1970. Signals were at 10.2kHz and 13.6kHz emitted from the OMEGA transmitters in Hawaii and Trinidad. A relationship between SPA frequency and sun spot number was observed. For sun spot number near 85, about one SPA per day will be observed somewhere in the world. SPA activity nearly vanishes during periods of low sun spot number. During years of high solar activity, phase perturbations observed near noon are dominated by SPA effects beyond the 95th percentile. The SPA's can be represented by a rapid phase run-off which is approximately linear in time, peaking in about 6 minutes, and followed by a linear recovery. Typical duration is 49 minutes.

  10. Cold-welding test environment

    NASA Technical Reports Server (NTRS)

    Wang, J. T.

    1972-01-01

    A flight test was conducted and compared with ground test data. Sixteen typical spacecraft material couples were mounted on an experimental research satellite in which a motor intermittently drove the spherical moving specimens across the faces of the fixed flat specimens in an oscillating motion. Friction coefficients were measured over a 14-month orbital period. Surface-to-surface sliding was found to be the controlling factor in generating friction in a vacuum environment. Friction appears to be independent of passive vacuum exposure time. Prelaunch and postlaunch tests identical to the flight test were performed in an oil-diffusion-pumped ultrahigh vacuum chamber. Only 50% of the resultant data agreed with the flight data owing to pump oil contamination. Identical ground tests were run in an ultrahigh vacuum facility and an ion-pumped vacuum chamber. The agreement (90%) between data from these tests and flight data established the adequacy of these test environments and facilities.

  11. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA as opposed to using a software implementation running on a typical general purpose processor.
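
    A homomorphic filter of the kind profiled above works by moving the image into the log domain, where multiplicative illumination becomes additive and can be separated from reflectance-like detail; the paper maps the natural logarithm onto FPGA hardware. As a rough software-side illustration only (not the authors' implementation), the sketch below uses an illustrative box-blur low-pass and made-up `kernel`/`gain` parameters:

```python
import numpy as np

def homomorphic_filter(img, kernel=9, gain=1.5):
    """Homomorphic filtering sketch: log transform, split into a
    low-pass (illumination-like) and high-pass (detail) part, boost
    the detail, then map back to the intensity domain."""
    logged = np.log1p(img.astype(np.float64))  # the step mapped to hardware
    # crude low-pass estimate via a box blur with edge padding
    pad = kernel // 2
    padded = np.pad(logged, pad, mode="edge")
    low = np.zeros_like(logged)
    for i in range(logged.shape[0]):
        for j in range(logged.shape[1]):
            low[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    high = logged - low               # reflectance-like detail
    return np.expm1(low + gain * high)  # back to the intensity domain
```

A flat image passes through unchanged (the high-pass part is zero), which is a quick sanity check on the log/exp round trip.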

  12. Kinematical calculations of RHEED intensity oscillations during the growth of thin epitaxial films

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2005-08-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction (RHEED) intensity from a surface growing by molecular beam epitaxy (MBE). The calculations are based on kinematical diffraction theory. Simple mathematical models are used for the growth simulation in order to investigate the fundamental behaviour of the reflectivity change during the growth of thin epitaxial films prepared using MBE.
    Program summary
    Title of program: GROWTH
    Catalogue identifier: ADVL
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Distribution format: tar.gz
    Computer for which the program is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Number of lines in distributed program, including test data, etc.: 10 989
    Number of bytes in distributed program, including test data, etc.: 103 048
    Nature of the physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying the growth and surface analysis of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The simplest approach to calculating the RHEED intensity during the growth of thin epitaxial films is kinematical diffraction theory (often called the kinematical approximation), in which only a single scattering event is taken into account. The biggest advantage of this approach is that the RHEED intensity can be calculated in real time. The approach also facilitates an intuitive understanding of the growth mechanism and surface morphology [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222].
    Method of solution: Epitaxial growth of thin films is modeled by a set of non-linear differential equations [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222]. The Runge-Kutta method with adaptive stepsize control is used to solve the initial value problem for these non-linear differential equations [W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989; see also: Numerical Recipes in C++, second ed., Cambridge University Press, 1992].
    Typical running time: The typical running time is machine and user-parameter dependent.
    Unusual features of the program: The program is distributed in the form of a main project file, Growth.dpr, and an independent Rhd.pas file, and should be compiled using Object Pascal compilers, including Borland Delphi.
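
    As a rough illustration of the method of solution (not the GROWTH program itself, which is written in Object Pascal), the sketch below integrates a simple irreversible layer-by-layer growth model with a step-doubling adaptive Runge-Kutta scheme and evaluates the kinematic anti-phase RHEED intensity. The specific growth model and tolerances are illustrative assumptions:

```python
import numpy as np

def growth_rhs(theta, flux=1.0):
    """Irreversible layer-by-layer growth: layer n fills at a rate
    proportional to the uncovered area of the layer beneath it."""
    below = np.concatenate(([1.0], theta[:-1]))  # the substrate is complete
    return flux * (below - theta)

def rk4_step(theta, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = growth_rhs(theta)
    k2 = growth_rhs(theta + h / 2 * k1)
    k3 = growth_rhs(theta + h / 2 * k2)
    k4 = growth_rhs(theta + h * k3)
    return theta + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_adaptive(theta, t_end, tol=1e-6):
    """RK4 with step-doubling error control, a simple stand-in for
    the adaptive Runge-Kutta scheme the program summary describes."""
    t, h = 0.0, 0.01
    while t < t_end:
        h = min(h, t_end - t)
        coarse = rk4_step(theta, h)
        fine = rk4_step(rk4_step(theta, h / 2), h / 2)
        err = np.abs(coarse - fine).max()
        if err > tol:
            h /= 2          # reject the step and retry with a smaller one
            continue
        theta, t = fine, t + h
        if err < tol / 10:
            h *= 2          # accept and grow the step
    return theta

def antiphase_intensity(theta):
    """Kinematic RHEED intensity at the anti-phase condition: exposed
    layer coverages interfere with alternating sign."""
    exposed = np.concatenate(([1.0], theta)) - np.concatenate((theta, [0.0]))
    signs = (-1.0) ** np.arange(exposed.size)
    return float((signs * exposed).sum() ** 2)
```

For this toy model the coverages are Poisson-like in time, so the anti-phase intensity dips as a half monolayer is deposited, reproducing the familiar RHEED oscillation mechanism.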

  13. Implementing the Gaia Astrometric Global Iterative Solution (AGIS) in Java

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Lammers, Uwe; Lindegren, Lennart; Hernandez, Jose; Hobbs, David

    2011-10-01

    This paper provides a description of the Java software framework constructed to run the Astrometric Global Iterative Solution (AGIS) for the Gaia mission. This is the mathematical framework that derives the rigid reference frame for Gaia observations from the Gaia data itself, making Gaia a self-calibrated mission that is independent of any input catalogue. The framework is highly distributed, typically running on a cluster of machines with a database back end. All code is written in the Java language. We describe the overall architecture and some details of the implementation.

  14. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is the calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multi-core personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option we are exploring through this work is the use of the cloud to speed up calibration of watershed-scale hydrologic models. Used in this capacity, the cloud provides a means of renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to balance the duration of the calibration precisely against its financial cost so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor the job submission process during calibration. The talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
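
    The DDS (dynamically dimensioned search) algorithm referenced above is, in its serial form, a simple greedy stochastic search: it perturbs a shrinking random subset of parameters and keeps a trial point only if it improves the objective. A minimal Python sketch of the usual serial formulation (Tolson and Shoemaker, 2007), not the parallel cloud version the talk describes:

```python
import math
import random

def dds(objective, bounds, max_iter=300, r=0.2, seed=1):
    """Serial dynamically dimensioned search: minimize `objective`
    over box constraints `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    best = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    best_f = objective(best)
    for i in range(1, max_iter + 1):
        # the probability of perturbing each dimension shrinks over
        # time, focusing the search on fewer parameters
        p = 1.0 - math.log(i) / math.log(max_iter)
        dims = [d for d in range(len(bounds)) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(len(bounds))]
        trial = list(best)
        for d in dims:
            lo, hi = bounds[d]
            x = trial[d] + rng.gauss(0, 1) * r * (hi - lo)
            # reflect perturbations back inside the bounds, then clamp
            if x < lo:
                x = lo + (lo - x)
            if x > hi:
                x = hi - (x - hi)
            trial[d] = min(max(x, lo), hi)
        f = objective(trial)
        if f < best_f:            # greedy: keep only improvements
            best, best_f = trial, f
    return best, best_f
```

The parallel variant the talk builds on evaluates many trial points concurrently, which is what makes renting many cloud cores pay off.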

  15. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  16. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  17. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  18. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  19. Frontal plane kinematics of the hip during running: Are they related to hip anatomy and strength?

    PubMed

    Baggaley, Michael; Noehren, Brian; Clasey, Jody L; Shapiro, Robert; Pohl, Michael B

    2015-10-01

    Excessive hip adduction has been associated with a number of lower extremity overuse running injuries. The excessive motion has been suggested to be the result of reduced strength of the hip abductor musculature. Hip anatomical alignment has been postulated to influence hip abduction (HABD) strength and thus may impact hip adduction during running. The purpose of this study was to investigate the relationship between hip anatomy, HABD strength, and frontal plane kinematics during running. Peak isometric HABD strength, 3D lower extremity kinematics during running, femoral neck-shaft angle (NSA), and pelvis width-femur length (PW-FL) ratio were recorded for 25 female subjects. Pearson correlations (p<0.05) were performed between variables. A fair relationship was observed between femoral NSA and HABD strength (r=-0.47, p=0.02), where an increased NSA was associated with reduced HABD strength. No relationship was observed between HABD strength and hip adduction during running. Neither of the anatomical measurements, NSA or PW-FL, was associated with hip adduction during running. Deviations in the femoral NSA have a limited ability to influence peak isometric hip abduction strength or frontal plane hip kinematics during running. Hip abduction strength also does not appear to be linked to changes in hip kinematics. These findings in healthy individuals question whether the excessive hip adduction typically seen in female runners with overuse injuries is caused by deviations in hip abduction strength or anatomical structure. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Older Runners Retain Youthful Running Economy Despite Biomechanical Differences

    PubMed Central

    Beck, Owen N.; Kipp, Shalaya; Roby, Jaclyn M.; Grabowski, Alena M.; Kram, Rodger; Ortega, Justus D.

    2015-01-01

    Purpose Sixty-five years of age typically marks the onset of impaired walking economy. However, running economy has not been assessed beyond the age of 65 years. Furthermore, a critical determinant of running economy is the spring-like storage and return of elastic energy from the leg during stance, which is related to leg stiffness. Therefore, we investigated whether runners over the age of 65 years retain youthful running economy and/or leg stiffness across running speeds. Methods Fifteen young and fifteen older runners ran on a force-instrumented treadmill at 2.01, 2.46, and 2.91 m·s−1. We measured their rates of metabolic energy consumption (i.e. metabolic power), ground reaction forces, and stride kinematics. Results There were only small differences in running economy between young and older runners across the range of speeds. Statistically, the older runners consumed 2–9% less metabolic energy than the young runners across speeds (p=0.012). Also, the leg stiffness of older runners was 10–20% lower than that of young runners across the range of speeds (p=0.002), and in contrast to the younger runners, the leg stiffness of older runners decreased with speed (p<0.001). Conclusion Runners beyond 65 years of age maintain youthful running economy despite biomechanical differences. It may be that vigorous exercise, such as running, prevents the age-related deterioration of muscular efficiency, and therefore may make everyday activities easier. PMID:26587844

  1. The Energy Cost of Running with the Ball in Soccer.

    PubMed

    Piras, Alessandro; Raffi, Milena; Atmatzidis, Charalampos; Merni, Franco; Di Michele, Rocco

    2017-11-01

    Running with the ball is a soccer-specific activity frequently used by players during match play and training drills. Nevertheless, the energy cost (EC) of on-grass running with the ball has not yet been determined. The purpose of this study was therefore to assess the EC of constant-speed running with the ball, and to compare it with the EC of normal running. Eight amateur soccer players performed two 6-min runs at 10 km/h on artificial turf, with and without the ball respectively. EC was measured with indirect calorimetry and, furthermore, estimated with a method based on players' accelerations measured with a GPS receiver. The EC measured with indirect calorimetry was higher in running with the ball (4.60±0.42 J/kg/m) than in normal running (4.19±0.33 J/kg/m), with a very likely moderate difference between conditions. Instead, a likely small difference was observed between conditions for EC estimated from GPS data (4.87±0.07 vs. 4.83±0.08 J/kg/m). This study sheds light on the energy expenditure of playing soccer, providing relevant data about the EC of a typical soccer-specific activity. These findings may be a reference for coaches to precisely determine the training load in drills with the ball, such as soccer-specific circuits or small-sided games. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Effect of Minimalist Footwear on Running Efficiency

    PubMed Central

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  3. Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)

    2001-01-01

    The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along slide basket cables utilized by the astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another current, potentially viable application at NASA is in detecting anomalies of the NASA Space Shuttle Orbiter's radiator panels.

  4. Isotemporal Substitution Paradigm for Physical Activity Epidemiology and Weight Change

    PubMed Central

    Willett, Walter C.; Hu, Frank B.; Ding, Eric L.

    2009-01-01

    For a fixed amount of time engaged in physical activity, activity choice may affect body weight differently depending partly on other activities’ displacement. Typical models used to evaluate effects of physical activity on body weight do not directly address these substitutions. An isotemporal substitution paradigm was developed as a new analytic model to study the time-substitution effects of one activity for another. In 1991–1997, the authors longitudinally examined the associations of discretionary physical activities, with varying activity displacements, with 6-year weight loss maintenance among 4,558 healthy, premenopausal US women who had previously lost >5% of their weight. Results of isotemporal substitution models indicated widely heterogeneous relations with each physical activity type (P < 0.001) depending on the displaced activities. Notably, whereas 30 minutes/day of brisk walking substituted for 30 minutes/day of jogging/running was associated with weight increase (1.57 kg, 95% confidence interval: 0.33, 2.82), brisk walking was associated with lower weight when substituted for slow walking (−1.14 kg, 95% confidence interval: −1.75, −0.53) and with even lower weight when substituted for TV watching. Similar heterogeneous relations with weight change were found for each activity type (TV watching, slow walking, brisk walking, jogging/running) when displaced by other activities across these various models. The isotemporal substitution paradigm may offer new insights for future public health recommendations. PMID:19584129
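
    The isotemporal substitution model can be illustrated with a small synthetic example: include total activity time in the regression and drop one activity (here TV watching); each remaining coefficient then estimates the effect of substituting a unit of that activity for the dropped one, holding total time fixed. All data and effect sizes below are fabricated for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# synthetic minutes/day spent in four activities (hypothetical data)
tv, slow, brisk, jog = rng.uniform(0, 60, (4, n))
total = tv + slow + brisk + jog
# hypothetical per-minute effects on weight change, plus noise
weight = 0.02 * tv + 0.005 * slow - 0.01 * brisk - 0.02 * jog \
    + rng.normal(0, 0.1, n)

# isotemporal substitution: regress on all activities except TV,
# plus total time and an intercept; each activity coefficient is the
# effect of substituting a minute of that activity for a minute of TV
X = np.column_stack([slow, brisk, jog, total, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
```

With these fabricated effects the recovered substitution coefficients are roughly -0.015, -0.03, and -0.04 per minute (each activity's own effect minus TV's), mirroring how the paper's heterogeneous estimates depend on which activity is displaced.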

  5. Implementing the SU(2) Symmetry for the DMRG

    NASA Astrophysics Data System (ADS)

    Alvarez, Gonzalo

    2010-03-01

    In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This talk will explain how the DMRG++ code [arXiv:0902.3185; Computer Physics Communications 180 (2009) 1572-1578] has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries will be discussed for typical tight-binding models of strongly correlated electronic systems. The computational bottleneck of the algorithm, and the use of shared memory parallelization will also be addressed. Finally, a roadmap for future work on DMRG++ will be presented.
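
    Blocking by symmetry rests on the fact that the Hamiltonian has no matrix elements between states with different conserved quantum numbers, so each symmetry sector can be diagonalized independently. A minimal numpy sketch of the idea (illustrative only, not the DMRG++ implementation):

```python
import numpy as np

def blocked_eigvals(h, qnums):
    """Diagonalize a Hamiltonian block by block.  `qnums` assigns a
    conserved quantum number to each basis state; symmetry guarantees
    <i|H|j> = 0 whenever qnums[i] != qnums[j], so each sector is an
    independent, smaller eigenproblem."""
    vals = []
    for q in np.unique(qnums):
        idx = np.where(qnums == q)[0]
        # extract and diagonalize this symmetry sector only
        vals.extend(np.linalg.eigvalsh(h[np.ix_(idx, idx)]))
    return np.sort(np.array(vals))
```

For a two-spin Heisenberg exchange in the basis (up-up, up-down, down-up, down-down), total Sz splits the 4x4 matrix into blocks of sizes 1, 2, 1; since dense diagonalization scales roughly cubically, many small blocks beat one big matrix, which is the source of the CPU-time gains the talk discusses.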

  6. An on-line calibration technique for improved blade by blade tip clearance measurement

    NASA Astrophysics Data System (ADS)

    Sheard, A. G.; Westerman, G. C.; Killeen, B.

    A description of a capacitance-based tip clearance measurement system which integrates a novel technique for calibrating the capacitance probe in situ is presented. The on-line calibration system allows the capacitance probe to be calibrated immediately prior to use, providing substantial operational advantages and maximizing measurement accuracy. The possible error sources when it is used in service are considered, and laboratory studies of performance to ascertain their magnitude are discussed. The 1.2-mm diameter FM capacitance probe is demonstrated to be insensitive to variations in blade tip thickness from 1.25 to 1.45 mm. Over typical compressor blading the probe's range was four times the variation in blade-to-blade clearance encountered in engine-run components.

  7. Dynamic motion modes of high temperature superconducting maglev on a 45-m long ring test line

    NASA Astrophysics Data System (ADS)

    Lei, W. Y.; Qian, N.; Zheng, J.; Jin, L. W.; Zhang, Y.; Deng, Z. G.

    2017-10-01

    With the development of high temperature superconducting (HTS) maglev, studies on the running stability have become more and more significant to ensure the operation safety. An experimental HTS maglev vehicle was tested on a 45-m long ring test line under the speed from 4 km/h to 20 km/h. The lateral and vertical acceleration signals of each cryostat were collected by tri-axis accelerometers in real time. By analyzing the phase relationship of acceleration signals on the four cryostats, several typical motion modes of the HTS maglev vehicle, including lateral, yaw, pitch and heave motions were observed. This experimental finding is important for the next improvement of the HTS maglev system.

  8. Characterization and speciation of fine particulate matter inside the public transport buses running on bio-diesel.

    DOT National Transportation Integrated Search

    2009-09-01

    Air pollution with respect to particulate matter was investigated in Toledo, Ohio, USA, a : city of approximately 300,000, in 2009. Two study buses were selected to reflect typical : exposure conditions of passengers while traveling in the bus. Monit...

  9. TEANGA: Journal of the Irish Association for Applied Linguistics, 1979-1993.

    ERIC Educational Resources Information Center

    O'Baoill, Donall P., Ed.

    1993-01-01

    This document consists of a complete run of the thirteen issues of "TEANGA" published since its inception in 1979 through 1993. Typical article topics include: linguistic research approaches and methodology; interlanguage, language transfer, and interference; second language instruction; language testing; language variation; discussion…

  10. Learning and Parallelization Boost Constraint Search

    ERIC Educational Resources Information Center

    Yun, Xi

    2013-01-01

    Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…

  11. 5K Run: 7-Week Training Schedule for Beginners

    MedlinePlus

    ... This 5K training schedule incorporates a mix of running, walking and resting. This combination helps reduce the ... you'll gradually increase the amount of time running and reduce the amount of time walking. If ...

  12. The Effect of Person Order on Egress Time: A Simulation Model of Evacuation From a Neolithic Visitor Attraction.

    PubMed

    Stewart, Arthur; Elyan, Eyad; Isaacs, John; McEwen, Leah; Wilson, Lyn

    2017-12-01

    The aim of this study was to model the egress of visitors from a Neolithic visitor attraction. Tourism attracts increasing numbers of elderly and mobility-impaired visitors to our built-environment heritage sites. Some such sites have very limited and awkward access, were not designed for mass visitation, and may not be modifiable to facilitate disabled access. As a result, emergency evacuation planning must take cognizance of robust information, and in this study we aimed to establish the effect of visitor position on egress. Direct observation of three tours at Maeshowe, Orkney, established the typical transit times of able-bodied individuals and a mobility-impaired person through the 10-m access tunnel. These observations informed the design of egress and evacuation models running on the Unity gaming platform. A slow-moving person at the observed speed typically increased the time to safety of 20 people by 170% and reduced the advantage offered by closer tunnel separation by 26%. Using speeds for size-specific characters at the 50th, 95th, and 99th percentiles increased the time to safety in emergency evacuation by 51% compared with able-bodied individuals. Larger individuals may slow egress times of a group; however, a single slow-moving mobility-impaired person exerts a greater influence on group egress, profoundly influencing those behind. Unidirectional routes in historic buildings and other visitor attractions are vulnerable to slow-moving visitors during egress. The model presented in this study is scalable, is applicable to other buildings, and can be used as part of a risk assessment and emergency evacuation plan in future work.
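
    The key mechanism, that a single slow mover in a single-file passage delays everyone behind because no one can overtake, can be sketched with a toy queueing model. The headway, minimum gap and transit times below are invented for illustration and are not the Maeshowe measurements:

```python
def egress_time(transit_times, headway=2.0, min_gap=0.5):
    """Single-file tunnel egress: person i enters `headway` seconds
    after person i-1 and cannot overtake, so each person clears the
    tunnel no sooner than `min_gap` after the person ahead.
    Returns the time at which the last person is clear."""
    exit_t = float("-inf")
    for i, transit in enumerate(transit_times):
        enter = i * headway
        exit_t = max(enter + transit, exit_t + min_gap)
    return exit_t

# illustrative numbers: 20 people at 5 s each through the tunnel,
# with one 60 s slow mover placed either first or last in the order
fast = [5.0] * 20
slow_first = egress_time([60.0] + fast[1:])
slow_last = egress_time(fast[:-1] + [60.0])
```

Even in this toy model the slow mover's position changes the total egress time, which is the effect the simulation study quantifies with measured speeds.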

  13. Comparison of the UAF Ionosphere Model with Incoherent-Scatter Radar Data

    NASA Astrophysics Data System (ADS)

    McAllister, J.; Maurits, S.; Kulchitsky, A.; Watkins, B.

    2004-12-01

    The UAF Eulerian Parallel Polar Ionosphere Model (UAF EPPIM) is a first-principles three-dimensional time-dependent representation of the northern polar ionosphere (>50 degrees north latitude). The model routinely generates short-term (~2 hours) ionospheric forecasts in real-time. It may also be run in post-processing/batch mode for specific time periods, including long-term (multi-year) simulations. The model code has been extensively validated (~100k comparisons/model year) against ionosonde foF2 data during quiet and moderate solar activity in 2002-2004 with reasonable fidelity (typical relative RMS 10-20% for summer daytime, 30-50% winter nighttime). However, ionosonde data is frequently not available during geomagnetic disturbances. The objective of the work reported here is to compare model outputs with available incoherent-scatter radar data during the storm period of October-November 2003. Model accuracy is examined for this period and compared to model performance during geomagnetically quiet and moderate circumstances. Possible improvements are suggested which are likely to boost model fidelity during storm conditions.

  14. Energy-momentum conserving higher-order time integration of nonlinear dynamics of finite elastic fiber-reinforced continua

    NASA Astrophysics Data System (ADS)

    Erler, Norbert; Groß, Michael

    2015-05-01

    For many years the relevance of fibre-reinforced polymers has been steadily increasing in engineering, especially in the aircraft and automotive industries. Due to their high strength in the fibre direction, combined with the possibility of lightweight construction, these composites increasingly replace traditional materials such as metals. Fibre-reinforced polymers are often manufactured from glass or carbon fibres as attachment parts, or from steel or nylon cord as force-transmission parts. Attachment parts are mostly subjected to small strains, but force-transmission parts usually suffer large deformations in at least one direction. Here, a geometrically nonlinear formulation is necessary. Typical examples are helicopter rotor blades, where the fibres stabilize the structure in order to counteract large centrifugal forces. For long-run analyses of rotor blade deformations, numerically stable time integrators for anisotropic materials are required. This paper presents higher-order accurate and numerically stable time-stepping schemes for nonlinear elastic fibre-reinforced continua with anisotropic stress behaviour.

  15. Space and Time Partitioning with Hardware Support for Space Applications

    NASA Astrophysics Data System (ADS)

    Pinto, S.; Tavares, A.; Montenegro, S.

    2016-08-01

    Complex and critical systems like airplanes and spacecraft implement a rapidly growing number of functions. Traditionally, those systems were implemented with fully federated architectures, but the number and complexity of functions desired of today's systems has led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach to consolidation, combining several applications onto one generic computing resource. The current approach moves towards the higher integration provided by the space and time partitioning (STP) of system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace demands: performance, flexibility, safety and security, while simultaneously containing size, weight, power and cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.

  16. Discrete-time modelling of musical instruments

    NASA Astrophysics Data System (ADS)

    Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti

    2006-01-01

    This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
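
    As an illustration of the digital waveguide string model discussed above, the sketch below is a minimal Karplus-Strong-style plucked string: a delay line standing in for the travelling-wave loop, excited with a noise burst and terminated by a two-point averaging loss filter. All parameters are illustrative:

```python
import random

def waveguide_string(freq=220, sample_rate=44100, seconds=0.5,
                     damping=0.996):
    """Minimal digital waveguide / Karplus-Strong string model.
    The delay line length sets the pitch (sample_rate / freq);
    the averaging filter lumps the string's frequency-dependent
    losses at the termination."""
    rng = random.Random(0)
    delay = int(sample_rate / freq)
    # pluck excitation: fill the delay line with a noise burst
    line = [rng.uniform(-1.0, 1.0) for _ in range(delay)]
    out = []
    for _ in range(int(sample_rate * seconds)):
        first = line.pop(0)
        # two-point average = low-pass loss filter at the loop end
        line.append(damping * 0.5 * (first + line[0]))
        out.append(first)
    return out
```

Each pass around the loop attenuates high frequencies more than low ones, so the tone decays and mellows over time, just as the article describes for lumped-loss waveguide models.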

  17. Effects of a minimalist shoe on running economy and 5-km running performance.

    PubMed

    Fuller, Joel T; Thewlis, Dominic; Tsiros, Margarita D; Brown, Nicholas A T; Buckley, Jonathan D

    2016-09-01

    The purpose of this study was to determine if minimalist shoes improve time trial performance of trained distance runners and if changes in running economy, shoe mass, stride length, stride rate and footfall pattern were related to any difference in performance. Twenty-six trained runners performed three 6-min sub-maximal treadmill runs at 11, 13 and 15 km·h(-1) in minimalist and conventional shoes while running economy, stride length, stride rate and footfall pattern were assessed. They then performed a 5-km time trial. In the minimalist shoe, runners completed the trial in less time (effect size 0.20 ± 0.12), were more economical during sub-maximal running (effect size 0.33 ± 0.14) and decreased stride length (effect size 0.22 ± 0.10) and increased stride rate (effect size 0.22 ± 0.11). All but one runner ran with a rearfoot footfall in the minimalist shoe. Improvements in time trial performance were associated with improvements in running economy at 15 km·h(-1) (r = 0.58), with 79% of the improved economy accounted for by reduced shoe mass (P < 0.05). The results suggest that running in minimalist shoes improves running economy and 5-km running performance.

  18. Sex-related differences in the wheel-running activity of mice decline with increasing age.

    PubMed

    Bartling, Babett; Al-Robaiy, Samiya; Lehnich, Holger; Binder, Leonore; Hiebl, Bernhard; Simm, Andreas

    2017-01-01

Laboratory mice of both sexes with free access to running wheels are commonly used to study mechanisms underlying the beneficial effects of physical exercise on health and aging in humans. However, comparative wheel-running activity profiles of male and female mice over long periods, in which increasing age plays an additional role, are unknown. Therefore, we continuously recorded the wheel-running activity (i.e., total distance, median velocity, time of breaks) of female and male mice until 9 months of age. Our records indicated higher wheel-running distances for females than males, which were highest in 2-month-old mice. This was achieved mainly by higher running velocities of the females, not by longer running times. However, the sex-related differences declined in parallel with the age-associated reduction in wheel-running activity. Female mice also showed more variance between weekly running distances than males, recorded most often for females 4-6 months old but not older. Additional records of 24-month-old mice of both sexes indicated greatly reduced wheel-running activity at old age. Surprisingly, this reduction resulted mainly from lower running velocities rather than shorter running times. Old mice also differed in their course of night activity, which peaked later than in younger mice. In summary, we demonstrated the influence of sex on the age-dependent activity profile of mice, which contrasts somewhat with humans; this has to be considered when transferring exercise-mediated mechanisms from mouse to human. Copyright © 2016. Published by Elsevier Inc.

  19. Regional sea level variability in a high-resolution global coupled climate model

    NASA Astrophysics Data System (ADS)

    Palko, D.; Kirtman, B. P.

    2016-12-01

The prediction of trends at regional scales is essential in order to adapt to and prepare for the effects of climate change. However, GCMs are unable to make reliable predictions at regional scales, and the prediction of local sea level trends is particularly critical. The main goal of this research is to utilize high-resolution (HR) (0.1° resolution in the ocean) coupled model runs of CCSM4 to analyze regional sea surface height (SSH) trends. Unlike typical, lower-resolution (1.0°) GCM runs, these HR runs resolve features in the ocean, like the Gulf Stream, which may have a large effect on regional sea level. We characterize the variability of regional SSH along the Atlantic coast of the US using tide gauge observations along with fixed radiative forcing runs of CCSM4 and HR interactive ensemble runs. The interactive ensemble couples an ensemble-mean atmosphere with a single ocean realization. This coupling results in a 30% decrease in the strength of the Atlantic meridional overturning circulation; the HR interactive ensemble is therefore analogous to a HR hosing experiment. By characterizing the variability in these high-resolution GCM runs and observations, we seek to understand what processes influence coastal SSH along the East Coast of the United States and to better predict future sea level rise.

  20. Exclusive Preference Develops Less Readily on Concurrent Ratio Schedules with Wheel-Running than with Sucrose Reinforcement

    PubMed Central

    Belke, Terry W

    2010-01-01

    Previous research suggested that allocation of responses on concurrent schedules of wheel-running reinforcement was less sensitive to schedule differences than typically observed with more conventional reinforcers. To assess this possibility, 16 female Long Evans rats were exposed to concurrent FR FR schedules of reinforcement and the schedule value on one alternative was systematically increased. In one condition, the reinforcer on both alternatives was .1 ml of 7.5% sucrose solution; in the other, it was a 30-s opportunity to run in a wheel. Results showed that the average ratio at which greater than 90% of responses were allocated to the unchanged alternative was higher with wheel-running reinforcement. As the ratio requirement was initially increased, responding strongly shifted toward the unchanged alternative with sucrose, but not with wheel running. Instead, responding initially increased on both alternatives, then subsequently shifted toward the unchanged alternative. Furthermore, changeover responses as a percentage of total responses decreased with sucrose, but not wheel-running reinforcement. Finally, for some animals, responding on the increasing ratio alternative decreased as the ratio requirement increased, but then stopped and did not decline with further increments. The implications of these results for theories of choice are discussed. PMID:21451744

  2. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    ERIC Educational Resources Information Center

    Gonzales, Michael G.

    1984-01-01

Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method for deriving the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
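The empirical approach described (time the algorithm at several input sizes and infer its growth) can be sketched as follows; the list sizes and trial counts are arbitrary choices for illustration.

```python
import random
import time

def bubble_sort(a):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # early exit once the list is sorted
            break
    return a

def time_sort(n, trials=3):
    """Median wall-clock time to bubble-sort a random list of length n."""
    times = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        bubble_sort(data)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Doubling n should roughly quadruple the time, exposing O(n^2) growth.
for n in (250, 500, 1000):
    print(n, time_sort(n))
```

As the abstract notes, the same timing harness can be pointed at other algorithms to derive their empirical run times.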

  3. The relationship between aerobic fitness and recovery from high-intensity exercise in infantry soldiers.

    PubMed

    Hoffman, J R

    1997-07-01

    The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 +/- 2.4%) and groups 3 (2.6 +/- 1.7%), 4 (2.3 +/- 1.6%), and 5 (2.3 +/- 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.
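One plausible reading of the fatigue index described above (mean of the three sprint times divided by the fastest time, expressed as a percentage) is sketched below; the sprint times are hypothetical.

```python
def fatigue_index(sprint_times):
    """Mean sprint time relative to the fastest sprint, as a percentage.

    A perfectly consistent runner scores 0%; larger values indicate
    greater slowing across the repeated sprints.
    """
    mean_t = sum(sprint_times) / len(sprint_times)
    return (mean_t / min(sprint_times) - 1.0) * 100.0

# Three 140-m sprint times in seconds (hypothetical values)
print(fatigue_index([25.0, 25.8, 26.2]))  # ≈ 2.7, comparable to groups 3-5
```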

  4. The reliability of running economy expressed as oxygen cost and energy cost in trained distance runners.

    PubMed

    Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P

    2013-12-01

    This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.

  5. Evidence for prosauropod dinosaur gastroliths in the Bull Run Formation (Upper Triassic, Norian) of Virginia

    USGS Publications Warehouse

    Weems, Robert E.; Culp, Michelle J.; Wings, Oliver

    2007-01-01

Definitive criteria for distinguishing gastroliths from sedimentary clasts are lacking for many depositional settings, and many reported occurrences of gastroliths either cannot be verified or have been refuted. We discuss four occurrences of gastrolith-like stones (category 6 exoliths) not found within skeletal remains from the Upper Triassic Bull Run Formation of northern Virginia, USA. Despite their lack of obvious skeletal association, the most parsimonious explanation for several characteristics of these stones is their prolonged residence in the gastric mills of large animals. These characteristics include 1) typical gastrolith microscopic surface texture, 2) evidence of pervasive surface wear on many of these stones that has secondarily removed variable amounts of thick weathering rinds typically found on these stones, and 3) a width/length-ratio modal peak for these stones that is more strongly developed than in any population of fluvial or fanglomerate stones of any age found in this region. When taken together, these properties of the stones can be explained most parsimoniously by animal ingestion and gastric-mill abrasion. The size of these stones indicates the animals that swallowed them were large, and the best candidate is a prosauropod dinosaur, possibly an ancestor of the Early Jurassic gastrolith-producing prosauropod Massospondylus or Ammosaurus. Skeletal evidence for Upper Triassic prosauropods is lacking in the Newark Supergroup basins; footprints (Agrestipus hottoni and Eubrontes isp.) from the Bull Run Formation in the Culpeper basin previously ascribed to prosauropods are now known to be underprints (Brachychirotherium parvum) of an aetosaur and underprints (Kayentapus minor) of a ceratosaur.
The absence of prosauropod skeletal remains or footprints in all but the uppermost (upper Rhaetian) Triassic rocks of the Newark Supergroup is puzzling because prosauropod remains are abundant elsewhere in the world in Upper Triassic (Carnian, Norian, and lower Rhaetian) continental strata. The apparent scarcity of prosauropods in Upper Triassic strata of the Newark Supergroup is interpreted as an artifact of ecological partitioning, created by the habitat range and dietary preferences of phytosaurs and by the preservational biases at that time within the lithofacies of the Newark Supergroup basins.

  6. Processes Leading to Beaded Channels Formation in Central Yakutia

    NASA Astrophysics Data System (ADS)

    Tarbeeva, A. M.; Lebedeva, L.; Efremov, V. S.; Krylenko, I. V.; Surkov, V. V.

    2017-12-01

Beaded channels, consisting of deepened and widened pools connected by narrow runs, are common fluvial forms in permafrost regions. Recent studies have shown that beaded channels are very important for connecting alluvial rivers with headwater lakes, allowing fish passage and foraging habitats, as well as regulating river runoff. Beaded channels are known as typical thermokarst landforms; however, there is no direct evidence of their origin and formative processes. Geomorphological analyses of beaded channels have been completed in several permafrost regions, including field observations of the Shestakovka River in Central Yakutia. The study aims to identify the modern exogenic processes and formative mechanisms of beaded river channels. We show that the beaded channel of the Shestakovka River forms in perennially frozen sand with low ice content, leading us to hypothesize that thermokarst is not the main process of formation. Owing to their significant volume of water, the pools do not freeze over entirely during winter, even under harsh climatic conditions. As a result, lenses of pressurized water remain under the surface ice, underlain by perennially thawed sediments. The presence of thawed sediments under the pools and frozen sediments under the runs leads to uneven thermoerosion of the riverbed during floods, producing the beaded form of the channel. In addition, freezing of the pools during winter increases the pressure under the ice cover and forms ice mounds, which crack several times during winter, disturbing the riverbanks. Many 1st- to 3rd-order streams have a specific transitional meandering-to-beaded form resembling the shape of unconfined meandering rivers, but consisting of pools and runs. However, such channels exhibit no evidence of present-day erosion of concave banks or sediment accumulation at convex banks, as is typically observed in meandering rivers.
Such channel forms indicate that they were shaped by greater channel-forming discharges in the past. The transition to the beaded channel planform took place only later, presumably as a result of climate change. Reduction of water runoff and freezing of taliks led to the activation of cryogenic processes (thermokarst, uneven thermoerosion, and disturbance of riverbanks during the cracking of ice mounds).

  7. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  8. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  9. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  10. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  11. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...

  12. Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
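The POD step of the procedure (project high-dimensional states into a low-dimensional subspace spanned by the leading singular vectors of training snapshots) can be sketched with synthetic data. The dimensions and the rank-3 snapshot matrix below are assumptions for illustration, not the paper's compositional model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 500, 40, 5   # full state dim, number of snapshots, reduced dim

# Synthetic rank-3 'training run' snapshots, so an r = 5 basis captures them
snapshots = rng.normal(size=(n, 3)) @ rng.normal(size=(3, m))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]          # POD basis: leading r left singular vectors

x = snapshots[:, 0]     # a full-order state (n unknowns)
z = Phi.T @ x           # reduced coordinates (r unknowns)
x_rec = Phi @ z         # lift back to the full space

err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"reduced dim {r}, relative reconstruction error {err:.2e}")
```

In POD-TPWL the linearized flow equations are projected the same way, so each time step solves an r-by-r rather than n-by-n linear system, which is the source of the reported speedups.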

  13. Modulation of Soil Initial State on WRF Model Performance Over China

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Jin, Qinjian; Yi, Bingqi; Mullendore, Gretchen L.; Zheng, Xiaohui; Jin, Hongchun

    2017-11-01

The soil state (e.g., temperature and moisture) in a mesoscale numerical prediction model is typically initialized by reanalysis or analysis data that may be subject to large bias. Such bias may lead to unrealistic land-atmosphere interactions. This study shows that the Climate Forecast System Reanalysis (CFSR) dramatically underestimates soil temperature and overestimates soil moisture over most parts of China in the first (0-10 cm) and second (10-25 cm) soil layers compared to in situ observations in July 2013. A correction based on global optimal dual kriging is employed to correct the CFSR bias in soil temperature and moisture using in situ observations. To investigate the impacts of the corrected soil state on model forecasts, two numerical model simulations—a control run with the CFSR soil state and a disturbed run with the corrected soil state—were conducted using the Weather Research and Forecasting model. All simulations are initialized 4 times per day and run for 48 h. Model results show that the corrected soil state, for example, a warmer and drier surface over most parts of China, can enhance evaporation over wet regions, which changes the overlying atmospheric temperature and moisture. The changes in the lifting condensation level, level of free convection, and water transport due to the corrected soil state favor precipitation over wet regions while inhibiting precipitation over dry regions. Moreover, diagnoses indicate that remote moisture flux convergence plays a dominant role in the precipitation changes over the wet regions.

  14. Rollout and Turnoff (ROTO) Guidance and Information Displays: Effect on Runway Occupancy Time in Simulated Low-Visibility Landings

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith

    2001-01-01

    This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.

  15. Carbohydrate Nutrition and Team Sport Performance.

    PubMed

    Williams, Clyde; Rollo, Ian

    2015-11-01

The common pattern of play in 'team sports' is 'stop and go', i.e. players perform repeated bouts of brief high-intensity exercise punctuated by lower intensity activity. Sprints are generally 2-4 s long and recovery between sprints is of variable length. Energy production during brief sprints is derived from the degradation of intra-muscular phosphocreatine and glycogen (anaerobic metabolism). Prolonged periods of multiple sprints drain muscle glycogen stores, leading to a decrease in power output and a reduction in general work rate during training and competition. The impact of dietary carbohydrate interventions on team sport performance has typically been assessed using intermittent variable-speed shuttle running over a distance of 20 m. This method has evolved to include specific work-to-rest ratios and skills specific to team sports such as soccer, rugby and basketball. Increasing liver and muscle carbohydrate stores before sport helps delay the onset of fatigue during prolonged intermittent variable-speed running. Carbohydrate intake during exercise, typically ingested as carbohydrate-electrolyte solutions, is also associated with improved performance. The mechanism responsible is likely to be the availability of carbohydrate as a substrate for central and peripheral functions. Variable-speed running in hot environments is limited by the degree of hyperthermia before muscle glycogen availability becomes a significant contributor to the onset of fatigue. Finally, ingesting carbohydrate immediately after training and competition will rapidly recover liver and muscle glycogen stores.

  16. Static Stretching Alters Neuromuscular Function and Pacing Strategy, but Not Performance during a 3-Km Running Time-Trial

    PubMed Central

    Damasceno, Mayara V.; Duarte, Marcos; Pasqua, Leonardo A.; Lima-Silva, Adriano E.; MacIntosh, Brian R.; Bertuzzi, Rômulo

    2014-01-01

Purpose: Previous studies report that static stretching (SS) impairs running economy. Assuming that pacing strategy relies on the rate of energy use, this study aimed to determine whether SS would modify pacing strategy and performance in a 3-km running time-trial. Methods: Eleven recreational distance runners performed (a) a constant-speed running test without previous SS and a maximal incremental treadmill test; (b) an anthropometric assessment and a constant-speed running test with previous SS; (c) a 3-km time-trial familiarization on an outdoor 400-m track; and (d, e) two 3-km time-trials, one with previous static stretching (experimental situation) and one without (control situation). The order of sessions (d) and (e) was randomized in a counterbalanced fashion. Sit-and-reach and drop jump tests were performed before the 3-km running time-trial in the control situation and before and after the stretching exercises in the SS situation. Running economy, stride parameters, and electromyographic activity (EMG) of the vastus medialis (VM), biceps femoris (BF) and gastrocnemius medialis (GA) were measured during the constant-speed tests. Results: The overall running time did not change with condition (SS 11:35±00:31 s; control 11:28±00:41 s, p = 0.304), but the first 100 m was completed at a significantly lower velocity after SS. Surprisingly, SS did not modify running economy, but iEMG for the BF (+22.6%, p = 0.031), stride duration (+2.1%, p = 0.053) and range of motion (+11.1%, p = 0.0001) were significantly modified. Drop jump height decreased following SS (−9.2%, p = 0.001). Conclusion: Static stretching impaired neuromuscular function, resulting in a slow start during a 3-km running time-trial, demonstrating the fundamental role of the neuromuscular system in the self-selected speed during the initial phase of the race. PMID:24905918

  17. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
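The inspector/executor scheme described above can be sketched for a loop whose dependences are known only at run time, e.g. `a[i] = a[dep[i]] + 1` with a run-time indirection array. This is an illustrative reconstruction restricted to flow dependences (`dep[i] <= i`), not the authors' implementation.

```python
def inspector(dep):
    """Execution-time preprocessing: assign each iteration to a wavefront.

    An iteration may run as soon as the earlier iteration it reads from
    has completed; iterations in the same wavefront are independent.
    """
    wave = [0] * len(dep)
    for i, d in enumerate(dep):
        if d < i:                  # flow dependence on an earlier iteration
            wave[i] = wave[d] + 1
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

def executor(a, dep, fronts):
    """Run the loop wavefront by wavefront (inner loop could be parallel)."""
    for front in fronts:
        for i in front:            # conceptually: parallel for
            a[i] = a[dep[i]] + 1
    return a

dep = [0, 0, 1, 0, 2, 4]           # run-time dependence pattern
fronts = inspector(dep)            # here: [[0], [1, 3], [2], [4], [5]]
a = executor([0] * len(dep), dep, fronts)
```

Iterations 1 and 3 both depend only on iteration 0, so they land in the same wavefront and could execute concurrently, which is exactly the reordering for parallelism the abstract describes.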

  18. An alternative approach to the Army Physical Fitness Test two-mile run using critical velocity and isoperformance curves.

    PubMed

    Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R

    2012-02-01

    The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2(MAX): 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2(MAX) prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
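The standard critical velocity model fits a line to distance versus time-to-exhaustion, d = ARC + CV·t, where the slope is the critical velocity and the intercept the anaerobic running capacity (ARC). A sketch with hypothetical data (the times and distances below are illustrative, not the study's):

```python
import numpy as np

# Distance covered vs. time to exhaustion for four exhaustive treadmill
# runs (hypothetical values for illustration).
t = np.array([150.0, 240.0, 420.0, 900.0])     # time to exhaustion (s)
d = np.array([800.0, 1180.0, 1930.0, 3940.0])  # distance covered (m)

cv, arc = np.polyfit(t, d, 1)   # slope = CV (m/s), intercept = ARC (m)
print(f"critical velocity ≈ {cv:.2f} m/s, anaerobic running capacity ≈ {arc:.0f} m")
```

Plotting an athlete's (CV, ARC) point against isoperformance curves derived from the APFT two-mile standards then classifies the score, as the study describes.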

  19. Institutionalizing Faculty Mentoring within a Community of Practice Model

    ERIC Educational Resources Information Center

    Smith, Emily R.; Calderwood, Patricia E.; Storms, Stephanie Burrell; Lopez, Paula Gill; Colwell, Ryan P.

    2016-01-01

    In higher education, faculty work is typically enacted--and rewarded--on an individual basis. Efforts to promote collaboration run counter to the individual and competitive reward systems that characterize higher education. Mentoring initiatives that promote faculty collaboration and support also defy the structural and cultural norms of higher…

  20. The impact of firefighter personal protective equipment and treadmill protocol on maximal oxygen uptake.

    PubMed

    Lee, Joo-Young; Bakri, Ilham; Kim, Jung-Hyun; Son, Su-Young; Tochihara, Yutaka

    2013-01-01

    This study investigated the effects of firefighter personal protective equipment (PPE) on the determination of maximal oxygen uptake (VO(2max)) while using two different treadmill protocols: a progressive incline protocol (PIP) and a progressive speed protocol (PSP), with three clothing conditions (Light-light clothing; Boots-PPE with rubber boots; Shoes-PPE with running shoes). Bruce protocol with Light was performed for a reference test. Results showed there was no difference in VO(2max) between Bruce Light, PIP Light, and PSP Light. However, VO(2max) was reduced in Boots and Shoes with shortened maximal performance time (7 and 6 min reduced for PIP Boots and Shoes, respectively; 11 and 9 min reduced for PSP Boots and Shoes, respectively), whereas the increasing rate of VO(2) in Boots and Shoes during submaximal exercise was greater compared with Light. Wearing firefighter boots compared with wearing running shoes also significantly affected submaximal VO(2) but not VO(2max). These results suggest that firefighters' maximal performance determined from a typical VO(2max) test without wearing PPE may overestimate the actual performance capability of firefighters wearing PPE.

  2. Models@Home: distributed computing in bioinformatics using a screensaver based approach.

    PubMed

    Krieger, Elmar; Vriend, Gert

    2002-02-01

Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while time to parallelize them is lacking; Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs; Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux- and Windows-based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
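The job-control idea (a master farms independent, coarse-grained jobs out to workers and re-queues any job that is lost) can be sketched as below. This is an illustrative stand-in using threads, not the Models@Home screensaver code; the retry policy and the placeholder workload are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_job(job_id):
    """Placeholder CPU-bound job; the real system launches external
    programs (e.g. WHAT IF, YASARA) unmodified."""
    return job_id, sum(i * i for i in range(10000))

def master(job_ids, max_retries=2):
    """Distribute jobs and retry failures, mirroring the 'stringent
    control over job scheduling' needed when clients may disappear."""
    results = {}
    attempts = {j: 0 for j in job_ids}
    pending = set(job_ids)
    while pending:
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(run_job, j): j for j in pending}
            for fut in as_completed(futures):
                j = futures[fut]
                try:
                    jid, val = fut.result()
                    results[jid] = val
                    pending.discard(j)
                except Exception:
                    attempts[j] += 1            # job lost: re-queue it
                    if attempts[j] > max_retries:
                        pending.discard(j)      # give up after repeated loss
    return results

jobs = master([1, 2, 3])
```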

  3. Comparison of Sprint and Run Times with Performance on the Wingate Anaerobic Test.

    ERIC Educational Resources Information Center

    Tharp, Gerald D.; And Others

    1985-01-01

    Male volunteers were studied to examine the relationship between the Wingate Anaerobic Test (WAnT) and sprint-run times and to determine the influence of age and weight. Results indicate the WAnT is a moderate predictor of dash and run times but becomes a stronger predictor when adjusted for body weight. (Author/MT)

  4. 12 CFR 1102.306 - Procedures for requesting records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... section; (B) Where the running of such time is suspended for the calculation of a cost estimate for the... section; (C) Where the running of such time is suspended for the payment of fees pursuant to the paragraph... of the invoice. (ix) The time limit for the ASC to respond to a request will not begin to run until...

  5. Mechanisms for regulating step length while running towards and over an obstacle.

    PubMed

    Larsen, Roxanne J; Jackson, William H; Schmitt, Daniel

    2016-10-01

    The ability to run across uneven terrain with continuous stable movement is critical to the safety and efficiency of a runner. Successful step-to-step stabilization while running may be mediated by minor adjustments to a few key parameters (e.g., leg stiffness, step length, foot strike pattern). However, it is not known to what degree runners in relatively natural settings (e.g., trails, paved road, curbs) use the same strategies across multiple steps. This study investigates how three readily measurable running parameters - step length, foot placement, and foot strike pattern - are adjusted in response to encountering a typical urban obstacle - a sidewalk curb. Thirteen subjects were video-recorded as they ran at self-selected slow and fast paces. Runners targeted a specific distance before the curb for foot placement, and lengthened their step over the curb (p<0.0001) regardless of where the step over the curb was initiated. These strategies of adaptive locomotion disrupt step cycles temporarily, and may increase locomotor cost and muscle loading, but in the end assure dynamic stability and minimize the risk of injury over the duration of a run. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation

    NASA Astrophysics Data System (ADS)

    Samana, A. R.; Krmpotić, F.; Bertulani, C. A.

    2010-06-01

    A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary: Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ~8000. No. of bytes in distributed program, including test data, etc.: ~256 kB. Distribution format: tar.gz. Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.

  7. Acute differences in foot strike and spatiotemporal variables for shod, barefoot or minimalist male runners.

    PubMed

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-05-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min(-1)). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP.
    Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running.

  8. Acute Differences in Foot Strike and Spatiotemporal Variables for Shod, Barefoot or Minimalist Male Runners

    PubMed Central

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-01-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min-1). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. 
Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480

  9. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    PubMed Central

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. 
More importantly, the high accuracy observed in the GS 20 plastid genome sequence was generated for a significant reduction in time and cost over traditional shotgun-based genome sequencing techniques, although with approximately half the coverage of previously reported GS 20 de novo genome sequence. The GS 20 should be broadly applicable to angiosperm plastid genome sequencing, and therefore promises to expand the scale of plant genetic and phylogenetic research dramatically. PMID:16934154

  10. Short-term heat acclimation improves the determinants of endurance performance and 5-km running performance in the heat.

    PubMed

    James, Carl A; Richardson, Alan J; Watt, Peter W; Willmott, Ashley G B; Gibson, Oliver R; Maxwell, Neil S

    2017-03-01

    This study investigated the effect of 5 days of controlled short-term heat acclimation (STHA) on the determinants of endurance performance and 5-km performance in runners, relative to the impairment afforded by moderate heat stress. A control group (CON), matched for total work and power output (2.7 W·kg(-1)), differentiated thermal and exercise contributions of STHA on exercise performance. Seventeen participants (10 STHA, 7 CON) completed graded exercise tests (GXTs) in cool (13 °C, 50% relative humidity (RH), pre-training) and hot conditions (32 °C, 60% RH, pre- and post-training), as well as 5-km time trials (TTs) in the heat, pre- and post-training. STHA reduced resting (p = 0.01) and exercising (p = 0.04) core temperature alongside a smaller change in thermal sensation (p = 0.04). Both groups improved the lactate threshold (LT, p = 0.021), lactate turnpoint (LTP, p = 0.005) and velocity at maximal oxygen consumption (vV̇O2max; p = 0.031) similarly. Statistical differences between training methods were observed in TT performance (STHA, -6.2(5.5)%; CON, -0.6(1.7)%, p = 0.029) and total running time during the GXT (STHA, +20.8(12.7)%; CON, +9.8(1.2)%, p = 0.006). There were large mean differences in change in maximal oxygen consumption between STHA +4.0(2.2) mL·kg(-1)·min(-1) (7.3(4.0)%) and CON +1.9(3.7) mL·kg(-1)·min(-1) (3.8(7.2)%). Running economy (RE) deteriorated following both training programmes (p = 0.008). Similarly, RE was impaired in the cool GXT, relative to the hot GXT (p = 0.004). STHA improved endurance running performance in comparison with work-matched normothermic training, despite equality of adaptation for typical determinants of performance (LT, LTP, vV̇O2max). Accordingly, these data highlight the ergogenic effect of STHA, potentially via greater improvements in maximal oxygen consumption and specific thermoregulatory and associated thermal perception adaptations absent in normothermic training.

  11. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    PubMed

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury have been identified in novice runners; however, not much is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend towards running three times per week being more hazardous than two times could be observed. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  12. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    PubMed

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
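
The central argument above, that a non-convex energy-per-distance curve makes a walk-run mixture cheaper than steady travel at an intermediate average speed, can be illustrated numerically. The quadratic cost curves, speeds, and constants below are hypothetical stand-ins chosen for illustration, not the paper's measured curves:

```python
# Hypothetical, illustrative energy-per-distance curves; the paper's
# measured curves differ, but share the non-convex overall shape.
def e_walk(v):            # cheapest near a comfortable walking speed
    return 1.0 + (v - 1.2) ** 2

def e_run(v):             # cheapest near a preferred running speed
    return 1.5 + 0.2 * (v - 3.5) ** 2

def e(v):                 # overall locomotion cost: cheaper of the two gaits
    return min(e_walk(v), e_run(v))

D = 100.0                 # required distance (m)
T = 100.0 / 2.3           # allowed time (s), forcing an average speed of 2.3 m/s
v_avg = D / T

steady = D * e(v_avg)     # energy for steady travel at the average speed

# Walk at v1 for time t1 and run at v2 for time t2, chosen so that the
# distance and time constraints hold exactly: t1 + t2 = T, v1*t1 + v2*t2 = D.
v1, v2 = 1.2, 3.5
t2 = (D - v1 * T) / (v2 - v1)
t1 = T - t2
mixture = v1 * t1 * e(v1) + v2 * t2 * e(v2)

print(f"steady: {steady:.1f}, walk-run mixture: {mixture:.1f}")
```

Because the combined curve dips at the two gait-specific optima, the chord between them lies below the curve at intermediate speeds, which is exactly why the mixture wins.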

  13. A Monotonic Degradation Assessment Index of Rolling Bearings Using Fuzzy Support Vector Data Description and Running Time

    PubMed Central

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs. Yet performance degradation is strongly fuzzy, and the dynamic information is random and fuzzy, making it challenging to assess fuzzy bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and stably increases as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the index ε̄ has an oscillating trend, which disagrees with the irreversible development of damage. The running time is introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of damage with running time. PMID:23112591

  14. A monotonic degradation assessment index of rolling bearings using fuzzy support vector data description and running time.

    PubMed

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs. Yet performance degradation is strongly fuzzy, and the dynamic information is random and fuzzy, making it challenging to assess fuzzy bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and stably increases as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the index ε̄ has an oscillating trend, which disagrees with the irreversible development of damage. The running time is introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of damage with running time.
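
Since ε̄ oscillates while physical damage only accumulates, one simple way to fold running time in and obtain a monotonic index is to accumulate the coefficient over time. The synthetic signal and the cumulative formula below are assumed stand-ins, not the paper's exact DSI definition; they only illustrate why such a time-integrated index can never decrease:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 500)                       # running time (h)

# Hypothetical oscillating degradation coefficient: drifts upward as
# damage grows but fluctuates step to step (stands in for epsilon-bar).
eps = 0.02 * t + 0.5 * np.sin(0.8 * t) + rng.normal(0, 0.2, t.size)
eps = np.clip(eps, 0.0, None)                      # coefficient is non-negative

# Assumed monotonic index: integrate the coefficient over running time,
# so each new sample can only add to (never subtract from) the index.
dsi = np.cumsum(eps * np.gradient(t))

print(f"final index value: {dsi[-1]:.2f}")
```

Because every increment is non-negative, the index is monotone by construction even though ε̄ itself oscillates.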

  15. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.

  16. 77 FR 50198 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...

  17. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.

  18. Characterizing the Mechanical Properties of Running-Specific Prostheses

    PubMed Central

    Beck, Owen N.; Taboga, Paolo; Grabowski, Alena M.

    2016-01-01

    The mechanical stiffness of running-specific prostheses likely affects the functional abilities of athletes with leg amputations. However, each prosthetic manufacturer recommends prostheses based on subjective stiffness categories rather than performance based metrics. The actual mechanical stiffness values of running-specific prostheses (i.e. kN/m) are unknown. Consequently, we sought to characterize and disseminate the stiffness values of running-specific prostheses so that researchers, clinicians, and athletes can objectively evaluate prosthetic function. We characterized the stiffness values of 55 running-specific prostheses across various models, stiffness categories, and heights using forces and angles representative of those measured from athletes with transtibial amputations during running. Characterizing prosthetic force-displacement profiles with a 2nd degree polynomial explained 4.4% more of the variance than a linear function (p<0.001). The prosthetic stiffness values of manufacturer recommended stiffness categories varied between prosthetic models (p<0.001). Also, prosthetic stiffness was 10% to 39% less at angles typical of running 3 m/s and 6 m/s (10°-25°) compared to neutral (0°) (p<0.001). Furthermore, prosthetic stiffness was inversely related to height in J-shaped (p<0.001), but not C-shaped, prostheses. Running-specific prostheses should be tested under the demands of the respective activity in order to derive relevant characterizations of stiffness and function. In all, our results indicate that when athletes with leg amputations alter prosthetic model, height, and/or sagittal plane alignment, their prosthetic stiffness profiles also change; therefore variations in comfort, performance, etc. may be indirectly due to altered stiffness. PMID:27973573
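
The reported comparison of linear versus 2nd degree polynomial force-displacement fits can be reproduced in miniature on synthetic data. The spring constants and noise level below are invented for illustration, not measured prosthesis values:

```python
import numpy as np

# Synthetic force-displacement data for one prosthesis (illustrative
# numbers only): a mildly stiffening spring plus measurement noise.
disp = np.linspace(0.0, 0.08, 50)                  # displacement (m)
force = 16e3 * disp + 60e3 * disp ** 2             # force (N)
force += np.random.default_rng(1).normal(0, 20, disp.size)

def r_squared(x, y, deg):
    """Variance explained by a degree-`deg` polynomial least-squares fit."""
    fit = np.polyval(np.polyfit(x, y, deg), x)
    ss_res = np.sum((y - fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_lin = r_squared(disp, force, 1)
r2_quad = r_squared(disp, force, 2)

# Stiffness is the local slope dF/dx of the fitted profile, so the 2nd
# degree fit lets stiffness vary with load instead of staying constant.
print(f"linear R^2 = {r2_lin:.4f}, quadratic R^2 = {r2_quad:.4f}")
```

The quadratic fit necessarily explains at least as much variance as the linear one (the linear model is a special case); the paper's point is that the gap is large enough to matter for characterizing stiffness.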

  19. 40 CFR Table 1b to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...

  20. The influence of training and mental skills preparation on injury incidence and performance in marathon runners.

    PubMed

    Hamstra-Wright, Karrie L; Coumbe-Lilley, John E; Kim, Hajwa; McFarland, Jose A; Huxel Bliven, Kellie C

    2013-10-01

    There has been a considerable increase in the number of participants running marathons over the past several years. The 26.2-mile race requires physical and mental stamina to successfully complete it. However, studies have not investigated how running and mental skills preparation influence injury and performance. The purpose of our study was to describe the training and mental skills preparation of a typical group of runners as they began a marathon training program, assess the influence of training and mental skills preparation on injury incidence, and examine how training and mental skills preparation influence marathon performance. Healthy adults (N = 1,957) participating in an 18-week training program for a fall 2011 marathon were recruited for the study. One hundred twenty-five runners enrolled and received 4 surveys: pretraining, 6 weeks, 12 weeks, posttraining. The pretraining survey asked training and mental skills preparation questions. The 6- and 12-week surveys asked about injury incidence. The posttraining survey asked about injury incidence and marathon performance. Tempo runs during training preparation had a significant positive relationship to injury incidence in the 6-week survey (ρ[93] = 0.26, p = 0.01). The runners who reported incorporating tempo and interval runs, running more miles per week, and running more days per week in their training preparation ran significantly faster than did those reporting less tempo and interval runs, miles per week, and days per week (p ≤ 0.05). Mental skills preparation did not influence injury incidence or marathon performance. To prevent injury, and maximize performance, while marathon training, it is important that coaches and runners ensure that a solid foundation of running fitness and experience exists, followed by gradually building volume, and then strategically incorporating runs of various speeds and distances.

  1. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster at 14 Gb RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.

  2. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
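
The matching scheme described (compress probe, gallery, and mixed data, then pick the largest composite compression ratio) can be sketched with a general-purpose compressor on byte strings standing in for images. The CCR formula below is one plausible reading of the idea, not necessarily the paper's exact definition, and the "subjects" are toy byte patterns:

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes at maximum compression level."""
    return len(zlib.compress(data, 9))

def ccr(probe: bytes, gallery: bytes) -> float:
    # If probe and gallery are similar, the mixed stream compresses much
    # better than the two separately, pushing this ratio up.
    return (csize(probe) + csize(gallery)) / csize(probe + gallery)

# Toy "gallery images" as repetitive byte patterns (hypothetical data).
gallery = {
    "subject_a": b"dark-hair round-face glasses " * 40,
    "subject_b": b"light-hair oval-face no-glasses " * 40,
}
probe = b"dark-hair round-face glasses " * 40     # same subject as A

best = max(gallery, key=lambda name: ccr(probe, gallery[name]))
print("matched:", best)
```

Swapping zlib for JPEG on real pixel data gives the JPEG-CPB variant in spirit: the per-match cost is dominated by compressing the mixed image, as the abstract notes.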

  3. Novel algorithm for a smartphone-based 6-minute walk test application: algorithm, application development, and evaluation.

    PubMed

    Capela, Nicole A; Lemaire, Edward D; Baddour, Natalie

    2015-02-20

    The 6-minute walk test (6MWT: the maximum distance walked in 6 minutes) is used by rehabilitation professionals as a measure of exercise capacity. Today's smartphones contain hardware that can be used for wearable sensor applications and mobile data analysis. A smartphone application can run the 6MWT and provide typically unavailable biomechanical information about how the person moves during the test. A new algorithm for a calibration-free 6MWT smartphone application was developed that uses the test's inherent conditions and smartphone accelerometer-gyroscope data to report the total distance walked, step timing, gait symmetry, and walking changes over time. This information is not available with a standard 6MWT and could help with clinical decision-making. The 6MWT application was evaluated with 15 able-bodied participants. A BlackBerry Z10 smartphone was worn on a belt at the mid lower back. Audio from the phone instructed the person to start and stop walking. Digital video was independently recorded during the trial as a gold-standard comparator. The average difference between smartphone and gold standard foot strike timing was 0.014 ± 0.015 s. The total distance calculated by the application was within 1 m of the measured distance for all but one participant, which was more accurate than other smartphone-based studies. These results demonstrated that clinically relevant 6MWT results can be achieved with typical smartphone hardware and a novel algorithm.
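    One ingredient of such an application, recovering foot-strike times from the trunk-worn accelerometer, can be illustrated with a simple threshold-plus-refractory peak detector. This is a hypothetical sketch, not the paper's algorithm (which also uses gyroscope data and the test's inherent conditions); the threshold and refractory period are made-up illustrative values.

```python
def detect_foot_strikes(accel, dt, threshold=1.5, refractory=0.3):
    """Return times (s) at which vertical acceleration (in g) exceeds
    `threshold`, keeping detections at least `refractory` seconds apart.
    Illustrative values; real gait-event detection is more involved."""
    strikes, last = [], -refractory
    for i, a in enumerate(accel):
        t = i * dt
        if a > threshold and t - last >= refractory:
            strikes.append(t)
            last = t
    return strikes
```

    Step timing, and from it gait symmetry, would then follow from the intervals between successive strikes of alternating feet.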

  4. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform the execution-time preprocessing, and executors, the transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
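    The inspector/executor split can be sketched in a few lines. This is a minimal illustration rather than the authors' transformation: the inspector assigns each iteration to the earliest wavefront consistent with the array locations it reads and writes (flow and output dependences only; anti-dependences are ignored for brevity), and the executor then runs the wavefronts in order, each wavefront's iterations being mutually independent and thus parallelizable.

```python
def inspector(reads, writes, n):
    """Assign each of n loop iterations to a wavefront at run time.
    reads[i]/writes[i] list the array locations iteration i touches."""
    last_writer = {}            # location -> wavefront of its most recent writer
    wave = [0] * n
    for i in range(n):
        # iteration i must follow any wavefront that wrote a location it touches
        deps = [last_writer.get(loc, -1) for loc in reads[i] + writes[i]]
        wave[i] = (max(deps) + 1) if deps else 0
        for loc in writes[i]:
            last_writer[loc] = wave[i]
    return wave

def executor(wave, body, n):
    """Run the loop wavefront by wavefront; within one wavefront the
    iterations could execute concurrently."""
    for w in range(max(wave) + 1):
        for i in range(n):
            if wave[i] == w:
                body(i)
```

    For a serial recurrence such as a[i] = f(a[i-1]) the inspector yields one iteration per wavefront, while a fully independent loop collapses to a single wavefront; re-running the same loop with an unchanged dependency structure can reuse the inspector's schedule, which is how the preprocessing overhead is amortized.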

  5. Research study for effects of case flexibility on bearing loads and rotor stability

    NASA Technical Reports Server (NTRS)

    Fenwick, J. R.; Tarn, R. B.

    1984-01-01

    Methods to evaluate the effect of casing flexibility on rotor stability and component loads were developed. Recent Rocketdyne turbomachinery was surveyed to determine typical properties and frequencies versus running speed. A small generic rotor was run with a flexible case with parametric variations in casing properties for comparison with a rotor attached to rigid supports. A program for the IBM personal computer for interactive evaluation of rotors and casings is developed. The Root locus method is extended for use in rotor dynamics for symmetrical systems by transforming all motion and coupling into a single plane and using a 90 degree criterion when plotting loci.

  6. Agricultural Airplane Mission Time Structure Characteristics

    NASA Technical Reports Server (NTRS)

    Jewel, J. W., Jr.

    1982-01-01

    The time structure characteristics of agricultural airplane missions were studied by using records from NASA VGH flight recorders. Flight times varied from less than 3 minutes to more than 103 minutes. There was a significant reduction in turning time between spreading runs as pilot experience in the airplane type increased. Spreading runs accounted for only 25 to 29 percent of the flight time of an agricultural airplane. Lowering the longitudinal stick force appeared to reduce both the turning time between spreading runs and pilot fatigue at the end of a working day.

  7. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  8. Lawrence Livermore National Laboratory and Sandia National Laboratory Nuclear Accident Dosimetry Support of IER 252 and the Dose Characterization of the Flattop Reactor at the DAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hickman, D. P.; Jeffers, K. L.; Radev, R. P.

    In support of IER 252, “Characterization of the Flattop Reactor at the NCERC,” LLNL performed ROSPEC measurements of the neutron spectrum and deployed 129 Personnel Nuclear Accident Dosimeters (PNADs) to establish the need for height corrections and to verify the neutron-spectrum evaluation of the fluences and dose. Only a very limited number of heights (typically one or two) can be measured using neutron spectrometers, so it was important to determine whether any height correction would be needed in future intercomparisons and studies. Specific measurement positions around the Flattop reactor are provided in Figure 1. Table 1 provides run and position information for LLNL measurements. The LLNL ROSPEC (R2) was used for run numbers 1-7, and PNADs were positioned on trees during run numbers 9, 11, and 13.

  9. 76 FR 13683 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-14

    ... To Move the Time at Which It Runs Its Daily Morning Pass March 8, 2011. Pursuant to Section 19(b)(1... Backed Securities Division (``MBSD'') intends to move the time at which it runs its daily morning pass... notify participants that MBSD intends to move the time at which it runs its daily morning pass from 10:30...

  10. Mechanics and energetics of human locomotion on sand.

    PubMed

    Lejeune, T M; Willems, P A; Heglund, N C

    1998-07-01

    Moving about in nature often involves walking or running on a soft yielding substratum such as sand, which has a profound effect on the mechanics and energetics of locomotion. Force platform and cinematographic analyses were used to determine the mechanical work performed by human subjects during walking and running on sand and on a hard surface. Oxygen consumption was used to determine the energetic cost of walking and running under the same conditions. Walking on sand requires 1.6-2.5 times more mechanical work than does walking on a hard surface at the same speed. In contrast, running on sand requires only 1.15 times more mechanical work than does running on a hard surface at the same speed. Walking on sand requires 2.1-2.7 times more energy expenditure than does walking on a hard surface at the same speed; while running on sand requires 1.6 times more energy expenditure than does running on a hard surface. The increase in energy cost is due primarily to two effects: the mechanical work done on the sand, and a decrease in the efficiency of positive work done by the muscles and tendons.

  11. Isocapnic hyperpnea training improves performance in competitive male runners.

    PubMed

    Leddy, John J; Limprasertkul, Atcharaporn; Patel, Snehal; Modlich, Frank; Buyea, Cathy; Pendergast, David R; Lundgren, Claes E G

    2007-04-01

    The effects of voluntary isocapnic hyperpnea (VIH) training (10 h over 4 weeks, 30 min/day) on the ventilatory system and on running performance were studied in 15 male competitive runners, 8 of whom trained twice weekly for 3 more months. Control subjects (n = 7) performed sham-VIH. Vital capacity (VC), FEV1, maximum voluntary ventilation (MVV), maximal inspiratory and expiratory mouth pressures, VO2max, 4-mile run time, treadmill run time to exhaustion at 80% VO2max, serum lactate, total ventilation (V(E)), oxygen consumption (VO2), oxygen saturation, and cardiac output were measured before and after 4 weeks of VIH. Respiratory parameters and 4-mile run time were measured monthly during the 3-month maintenance period. There were no significant changes in post-VIH VC and FEV1, but MVV improved significantly (+10%). Maximal inspiratory and expiratory mouth pressures, arterial oxygen saturation, and cardiac output did not change post-VIH. Respiratory and running performances were better 7 days versus 1 day after VIH. Seven days post-VIH, respiratory endurance (+208%) and treadmill run time (+50%) increased significantly, accompanied by significant reductions in respiratory frequency (-6%), V(E) (-7%), VO2 (-6%) and lactate (-18%) during the treadmill run. Post-VIH 4-mile run time did not improve in the control group, whereas it improved in the experimental group (-4%) and remained improved over a 3-month period of reduced VIH frequency. The improvements cannot be ascribed to improved blood oxygen delivery to muscle or to psychological factors.

  12. Lead-Tin Telluride Sputtered Thin Films for Infrared Sensors

    DTIC Science & Technology

    1975-06-01

    Carrier concentrations in Pb0.78Sn0.22Te films (Target #10). Effect of substrate bias voltage on as-deposited carrier concentration (Target #12). ... can be laid down in one deposition run with one target by simple bias voltage control. Absorption coefficients and energy gaps: typical plots.

  13. University Research Funding: The United States Is Behind and Falling

    ERIC Educational Resources Information Center

    Atkinson, Robert D.; Stewart, Luke A.

    2011-01-01

    Research and development drives innovation and innovation drives long-run economic growth, creating jobs and improving living standards in the process. University-based research is of particular importance to innovation, as the early-stage research that is typically performed at universities serves to expand the knowledge pool from which the…

  14. A Competitive Nonverbal False Belief Task for Children and Apes

    ERIC Educational Resources Information Center

    Krachun, Carla; Carpenter, Malinda; Call, Josep; Tomasello, Michael

    2009-01-01

    A nonverbal false belief task was administered to children (mean age 5 years) and two great ape species: chimpanzees ("Pan troglodytes") and bonobos ("Pan paniscus"). Because apes typically perform poorly in cooperative contexts, our task was competitive. Two versions were run: in both, a human competitor witnessed an experimenter hide a reward in…

  15. Race, Memory, and Master Narratives: A Critical Essay on U.S. Curriculum History

    ERIC Educational Resources Information Center

    Brown, Anthony L.; Au, Wayne

    2014-01-01

    The field of curriculum studies has a history of looking at its own past, summarizing and synthesizing the trends and patterns across its foundations. Whether through synoptic texts, historical analyses, or edited collections, the field's foundational retrospection typically traces a lineage of curriculum studies that runs through various…

  16. Growing Wheat. People on the Farm.

    ERIC Educational Resources Information Center

    Department of Agriculture, Washington, DC. Office of Governmental and Public Affairs.

    This booklet, one in a series about life on modern farms, describes the daily life of the Don Riffel family, wheat farmers in Kansas. Beginning with early morning, the booklet traces the family's activities through a typical harvesting day in July, while explaining how a wheat farm is run. The booklet also briefly describes the wheat growing…

  17. Tensions Mark Relationships between New Organizations and Teachers' Unions

    ERIC Educational Resources Information Center

    Sawchuk, Stephen

    2012-01-01

    As a new breed of national education advocacy organizations gains clout, they're entering into often-uneasy relationships with teachers' unions--and running into a debate about whether they can play a grassroots "ground game" comparable to that of labor. For many unions, the policy changes the newer groups typically support--staffing based on…

  18. The Top Six Compatibles: A Closer Look at the Machines That Are Most Compatible with the IBM PC.

    ERIC Educational Resources Information Center

    McMullen, Barbara E.; And Others

    1984-01-01

    Reviews six operationally compatible microcomputers that are most able to run IBM software without modifications--Compaq, Columbia, Corona, Hyperion, Eagle PC, and Chameleon. Information given for each includes manufacturer, uses, standard features, base list price, typical system price, and options and accessories. (MBR)

  19. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  20. Cutting Red Tape: Overcoming State Bureaucracies to Develop High-Performing State Education Agencies

    ERIC Educational Resources Information Center

    Hanna, Robert; Morrow, Jeffrey S.; Rozen, Marci

    2014-01-01

    States serve a special role in the nation's public education system. Through elected legislatures, states have endowed their various state departments of education with powers over public education, which include granting authority to local entities--typically school districts--to run schools. In their oversight capacity, states--traditionally…

  1. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  2. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  3. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  4. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  5. Plantar loading and foot-strike pattern changes with speed during barefoot running in those with a natural rearfoot strike pattern while shod.

    PubMed

    Cooper, Danielle M; Leissring, Sarah K; Kernozek, Thomas W

    2015-06-01

    Claims of injury reduction related to barefoot running have resulted in interest from the running public; however, its risks are not well understood for those who typically wear cushioned footwear. The aim was to examine how runners who ordinarily wear cushioned footwear and demonstrate a rearfoot strike pattern (RFSP) alter their foot strike pattern and plantar loading, without cueing or feedback, when asked to run barefoot at different speeds down a runway. Forty-one subjects ran barefoot at three different speeds across a pedography platform, which collected plantar loading variables for 10 regions of the foot; data were analyzed using a two-way mixed multivariate analysis of variance (MANOVA). A significant foot strike position (FSP)×speed interaction in each of the foot regions indicated that plantar loading differed based on FSP across the different speeds. The RFSP produced the highest total forces across the foot, while the pressures in subjects with a non-rearfoot strike pattern (NRFSP) were more similar between each of the metatarsals. The majority of subjects ran barefoot with a NRFSP and demonstrated lower total forces and a more uniform force distribution across the metatarsal regions. This may have an influence on injuries sustained in barefoot running. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A Portable FTIR Analyser for Field Measurements of Trace Gases and their Isotopologues: CO2, CH4, N2O, CO, δ13C in CO2 and δD in water vapour

    NASA Astrophysics Data System (ADS)

    Griffith, D. W.; Bryant, G. R.; Deutscher, N. M.; Wilson, S. R.; Kettlewell, G.; Riggenbach, M.

    2007-12-01

    We describe a portable Fourier Transform InfraRed (FTIR) analyser capable of simultaneous high precision analysis of CO2, CH4, N2O and CO in air, as well as δ13C in CO2 and δD in water vapour. The instrument is based on a commercial 1 cm-1 resolution FTIR spectrometer fitted with a mid-IR globar source, 26 m multipass White cell and thermoelectrically-cooled MCT detector operating between 2000 and 7500 cm-1. Air is passed through the cell and analysed in real time without any pre-treatment except for (optional) drying. An inlet selection manifold allows automated sequential analysis of samples from one or more inlet lines, with typical measurement times of 1-10 minutes per sample. The spectrometer, inlet sampling sequence, real-time quantitative spectrum analysis, data logging and display are all under the control of a single program running on a laptop PC, and can be left unattended for continuous measurements over periods of weeks to months. Selected spectral regions of typically 100-200 cm-1 width are analysed by a least squares fitting technique to retrieve concentrations of trace gases, 13CO2 and HDO. Typical precision is better than 0.1% without the need for calibration gases. Accuracy is similar if measurements are referenced to calibration standard gases. δ13C precision is typically around 0.1‰, and for δD it is 1‰. Applications of the analyser include clean and polluted air monitoring, tower-based flux measurements such as flux gradient or integrated horizontal flux measurements, automated soil chambers, and field-based measurements of isotopic fractionation in soil-plant-atmosphere systems. The simultaneous multi-component advantages can be exploited in tracer-type emission measurements, for example of CH4 from livestock using a co-released tracer gas and downwind measurement. We have also developed an open path variant especially suited to tracer release studies and measurements of NH3 emissions from agricultural sources. 
An illustrative selection of applications will be presented.
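    The spectral fitting step can be illustrated with synthetic data. The sketch below is not the instrument's software: it builds two made-up Gaussian reference spectra over an arbitrary window, simulates a measured spectrum as their linear combination, and recovers the component amounts with an ordinary least-squares fit, which is the essence of retrieving concentrations from a fitted window.

```python
import numpy as np

# Synthetic absorbance window with two made-up reference spectra (arbitrary
# band centers and widths; the real analyser fits selected 100-200 cm-1
# windows of measured FTIR spectra against reference spectra).
wavenumber = np.linspace(2100, 2250, 200)                     # cm-1
ref_a = np.exp(-0.5 * ((wavenumber - 2143.0) / 5.0) ** 2)     # 1 unit of gas A
ref_b = np.exp(-0.5 * ((wavenumber - 2200.0) / 8.0) ** 2)     # 1 unit of gas B

true_amounts = np.array([0.8, 2.5])
measured = true_amounts[0] * ref_a + true_amounts[1] * ref_b  # noise-free "measurement"

# Least-squares fit: one design-matrix column per absorber.
design = np.column_stack([ref_a, ref_b])
amounts, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(np.round(amounts, 3))  # recovers the true amounts [0.8, 2.5]
```

    With real spectra the same fit, repeated per window and per species, yields the reported trace-gas and isotopologue amounts.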

  7. Pathways into prostitution among female jail detainees and their implications for mental health services.

    PubMed

    McClanahan, S F; McClelland, G M; Abram, K M; Teplin, L A

    1999-12-01

    To explore the service needs of women in jail, the authors examined three pathways into prostitution: childhood sexual victimization, running away, and drug use. Studies typically have explored only one or two of these pathways, and the relationships among the three points of entry remain unclear. Data on 1,142 female jail detainees were used to examine the effects of childhood sexual victimization, running away, and drug use on entry into prostitution and their differential effects over the life course. Two distinct pathways into prostitution were identified. Running away had a dramatic effect on entry into prostitution in early adolescence, but little effect later in the life course. Childhood sexual victimization, by contrast, nearly doubled the odds of entry into prostitution throughout the lives of women. Although the prevalence of drug use was significantly higher among prostitutes than among nonprostitutes, drug abuse did not explain entry into prostitution. Running away and childhood sexual victimization provide distinct pathways into prostitution. The findings suggest that women wishing to leave prostitution may benefit from different mental health service strategies depending on which pathway to prostitution they experienced.

  8. Some tests of flat plate photovoltaic module cell temperatures in simulated field conditions

    NASA Technical Reports Server (NTRS)

    Griffith, J. S.; Rathod, M. S.; Paslaski, J.

    1981-01-01

    The nominal operating cell temperature (NOCT) of solar photovoltaic (PV) modules is an important characteristic. Typically, the power output of a PV module decreases 0.5% per deg C rise in cell temperature. Several tests were run with artificial sun and wind to study the parametric dependencies of cell temperature on wind speed and direction and ambient temperature. It was found that the cell temperature is extremely sensitive to wind speed, moderately so to wind direction and rather insensitive to ambient temperature. Several suggestions are made to obtain data more typical of field conditions.
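    The quoted coefficient translates directly into a derating formula. A minimal sketch, where the 0.5 %/°C figure comes from the abstract and the 25 °C reference temperature and 100 W rating are illustrative assumptions:

```python
def pv_power(p_rated_w, cell_temp_c, ref_temp_c=25.0, coeff_per_c=0.005):
    """Module power derated ~0.5% per deg C of cell temperature above the
    reference (illustrative linear model; reference values assumed)."""
    return p_rated_w * (1.0 - coeff_per_c * (cell_temp_c - ref_temp_c))

print(pv_power(100.0, 45.0))  # a 100 W module at 45 C delivers about 90 W
```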

  9. Entrapment Neuropathies of the Foot and Ankle.

    PubMed

    Ferkel, Eric; Davis, William Hodges; Ellington, John Kent

    2015-10-01

    Posterior tarsal tunnel syndrome is the result of compression of the posterior tibial nerve. Anterior tarsal tunnel syndrome (entrapment of the deep peroneal nerve) typically presents with pain radiating to the first dorsal web space. Distal tarsal tunnel syndrome results from entrapment of the first branch of the lateral plantar nerve and is often misdiagnosed initially as plantar fasciitis. Medial plantar nerve compression is seen most often in running athletes, typically with pain radiating to the medial arch. Morton neuroma is often seen in athletes who place their metatarsal arches repetitively in excessive hyperextension. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Velocity changes, long runs, and reversals in the Chromatium minus swimming response.

    PubMed Central

    Mitchell, J G; Martinez-Alonso, M; Lalucat, J; Esteve, I; Brown, S

    1991-01-01

    The velocity, run time, path curvature, and reorientation angle of Chromatium minus were measured as a function of light intensity, temperature, viscosity, osmotic pressure, and hydrogen sulfide concentration. C. minus changed both velocity and run time. Velocity decreased with increasing light intensity in sulfide-depleted cultures and increased in sulfide-replete cultures. The addition of sulfide to cultures grown at low light intensity (10 microeinsteins m-2 s-1) caused mean run times to increase from 10.5 to 20.6 s. The addition of sulfide to cultures grown at high light intensity (100 microeinsteins m-2 s-1) caused mean run times to decrease from 15.3 to 7.7 s. These changes were maintained for up to an hour and indicate that at least some members of the family Chromatiaceae simultaneously modulate velocity and turning frequency for extended periods as part of normal taxis. PMID:1991736

  11. Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS

    NASA Astrophysics Data System (ADS)

    Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.

    2013-12-01

    We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimation of carrier phase biases and ionosphere delay pre-cleans raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. 
We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed hydrodynamic Green's functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in Java, called 'GPS Cockpit', which is available for download.

  12. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds has been demonstrated for running performance at distances from 100 m to the marathon, but not in ultramarathon. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r2 = 0.46) by the following equation: (Performance in a 24-hour run, km) = 234.7 + 0.481 (longest training session before the 24-hour run, km) - 0.594 (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance, but anthropometric variables were not. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
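    The reported regression is straightforward to apply. A minimal sketch; the example inputs (a 60 km longest training run and a 3 h 20 min, i.e. 200-minute, marathon best) are the guideline values the authors suggest:

```python
def predict_24h_distance_km(longest_run_km, marathon_pb_min):
    """Predicted 24-hour race distance (km) from the study's regression
    (r2 = 0.46)."""
    return 234.7 + 0.481 * longest_run_km - 0.594 * marathon_pb_min

print(round(predict_24h_distance_km(60, 200), 1))  # about 144.8 km
```

    The prediction is close to the sample's mean achieved distance of 146.1 km.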

  13. How smart is your BEOL? productivity improvement through intelligent automation

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

    The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps: inspection, disposition, photomask repair, and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data, and make decisions based on the results. No matter how experienced operators are and how good the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly rather harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns, and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that still remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling, and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks, and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. 
To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to the necessary measurement and production steps. At the same time, the efficiency of assets is increased by avoiding unneeded cycle time and wasted resources on process steps that are not crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist, and a quantification of the benefits to a mask shop of full automation by the use of a back end of line model.

  14. Watershed Characteristics and Pre-Restoration Surface-Water Hydrology of Minebank Run, Baltimore County, Maryland, Water Years 2002-04

    USGS Publications Warehouse

    Doheny, Edward J.; Starsoneck, Roger J.; Striz, Elise A.; Mayer, Paul M.

    2006-01-01

    Stream restoration efforts have been ongoing in Maryland since the early 1990s. Physical stream restoration often involves replacement of lost sediments to elevate degraded streambeds, re-establishment of riffle-pool sequences along the channel profile, planting vegetation in riparian zones, and reconstructing channel banks, point bars, flood plains, and stream meanders. The primary goal of many restoration efforts is to re-establish geomorphic stability of the stream channel and reduce erosive energy from urban runoff. Monitoring streams prior to and after restoration could help quantify other possible benefits of stream restoration, such as improved water quality and biota. This report presents general watershed characteristics associated with the Minebank Run watershed, a small urban watershed in the south-central section of Baltimore County, Maryland, that was physically restored in phases during 1999, 2004, and 2005. The physiography, geology, hydrology, land use, soils, and pre-restoration geomorphic setting of the unrestored stream channel are discussed. The report describes a reach of Minebank Run that was selected for the purpose of collecting several types of environmental data prior to restoration, including continuous-record and partial-record stage and streamflow data, precipitation, and ground-water levels. Examples of surface-water data that were collected in and near the study reach during water years 2002 through 2004, including continuous-record streamflow, partial-record stage and discharge, and precipitation, are described. These data were used in analyses of several characteristics of surface-water hydrology in the watershed, including (1) rainfall totals, storm duration, and intensity, (2) instantaneous peak discharge and daily mean discharge, (3) stage-discharge ratings, (4) hydraulic-geometry relations, (5) water-surface slope, (6) time of concentration, (7) flood frequency, (8) flood volume, and (9) rainfall-runoff relations. 
Several hydrologic characteristics that are typical of urban environments were quantified by these analyses. These include (1) large ratios of peak discharge to daily mean discharge as an indicator of flashiness, (2) consistent shifting of the stage-discharge rating over short periods of time that indicates instability of the stream channel, (3) analyses of hydraulic-geometry relations that indicate mean velocities of 11 feet per second or more while the flow is contained in the stream channel, (4) discharges that are 4 to 5 times larger in Minebank Run for corresponding flood frequency recurrence intervals than in Slade Run, which is a Piedmont watershed of similar size with smaller percentages of urban development, and (5) flood waves that can travel through the stream channel at a velocity of 412 feet per minute, or 6.9 feet per second.
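Two of the quantities above are simple arithmetic and can be made explicit in a short sketch. The discharge values below are hypothetical placeholders (the report's tabulated data are not reproduced here); only the flood-wave speed conversion uses numbers quoted in the abstract.

```python
# Hypothetical illustration of two quantities from the abstract:
# the peak-to-daily-mean discharge ratio used as a flashiness
# indicator, and the unit conversion behind "412 feet per minute,
# or 6.9 feet per second".

def flashiness_ratio(peak_cfs, daily_mean_cfs):
    """Ratio of instantaneous peak discharge to daily mean discharge."""
    return peak_cfs / daily_mean_cfs

# Example with invented discharges (cubic feet per second); urban
# streams show large ratios because storm peaks are brief and sharp.
ratio = flashiness_ratio(peak_cfs=850.0, daily_mean_cfs=12.0)

# Flood-wave speed conversion quoted in the abstract
ft_per_s = 412.0 / 60.0  # 412 ft/min expressed in ft/s
```
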

  15. Orbital dynamics in galaxy mergers

    NASA Astrophysics Data System (ADS)

    Hoffman, Loren

    In the favored vacuum energy + cold dark matter (ΛCDM) cosmology, galaxies form through a hierarchical merging process. Mergers between comparable-mass systems are qualitatively different from the ongoing accretion of small objects by much larger ones, in that they can radically transform the nature of the merging objects, e.g. through violent relaxation of the stars and dark matter, triggered starbursts, and quasar activity. This thesis covers two phenomena unique to major galaxy mergers: the formation of supermassive black hole (SMBH) binary and triple systems, and the transformation of the stellar orbit structure through violent relaxation, triggered gas inflow, and star formation. In a major merger, the SMBHs can spiral in and form a bound binary in less than a Hubble time. If the binary lifetime exceeds the typical time between mergers, then triple black hole (BH) systems may form. We study the statistics of close triple-SMBH encounters in galactic nuclei by computing a series of three-body orbits with physically-motivated initial conditions appropriate for giant elliptical galaxies. Our simulations include a smooth background potential consisting of a stellar bulge plus a dark matter halo, drag forces due to gravitational radiation and dynamical friction on the stars and dark matter, and a simple model of the time evolution of the inner density profile under heating and mass ejection by the SMBHs. We find that the binary pair coalesces as a result of repeated close encounters in ~85% of our runs. In about 40% of the runs the lightest BH is left wandering through the galactic halo or escapes the galaxy altogether. The triple systems typically scour out cores with mass deficits ~1-2 times their total mass. The high coalescence rate and prevalence of very high-eccentricity orbits could provide interesting signals for the future Laser Interferometer Space Antenna (LISA). 
Our study of remnant orbit structure involved 42 disk-disk mergers at various gas fractions, and 10 re-mergers of the 40% gas remnants. All simulations were run using a version of GADGET-2 [173] that included subresolution models of radiative cooling, star formation, and supernova and AGN feedback. The potential was frozen at the last snapshot of each simulation and the orbits of ~50,000 randomly chosen stars were integrated for ~100 dynamical times, and classified based on their Fourier spectra using the algorithm of [30]. The 40% gas remnants were found to be dominated by minor-axis tube orbits in their inner regions, whereas box orbits were the dominant orbit family in the inner parts of the dissipationless disk-disk and remnant-remnant systems. The phase space available to minor-axis tube orbits in even the 5% gas remnants was much larger than that in the dissipationless remnants, but the 5% gas remnants are not fast rotators because these orbits tend to be isotropically distributed at low gas fractions. Some of the remnants show significant minor axis rotation, due to large orientation twists in their outer parts (in the 40% gas remnants) and asymmetrically rotating major-axis tube orbits throughout the remnants (in the re-mergers).

  16. Relationship between 1.5-mile run time, injury risk and training outcome in British Army recruits.

    PubMed

    Hall, Lianne J

    2017-12-01

    1.5-mile run time, as a surrogate measure of aerobic fitness, is associated with musculoskeletal injury (MSI) risk in military recruits. This study aimed to determine if 1.5-mile run times can predict injury risk and attrition rates from phase 1 (initial) training and determine if a link exists between phase 1 and 2 discharge outcomes in British Army recruits. 1.5-mile times from week 1 of initial training and MSI reported during training were retrieved for 3446 male recruits. Run times were examined against injury occurrence and training outcomes for 3050 recruits, using binary logistic regression and χ² analysis. The 1.5-mile run can predict injury risk and phase 1 attrition rates (χ²(1)=59.3, p<0.001; χ²(1)=66.873, p<0.001). Slower 1.5-mile run times were associated with higher injury occurrence (χ²(1)=59.3, p<0.001) and reduced phase 1 (χ²=104.609, p<0.001) and phase 2 (χ²=84.978, p<0.001) success. The 1.5-mile run can be used to guide a future standard that will in turn help reduce injury occurrence and improve training success. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
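The χ² statistics quoted above come from standard contingency-table tests of association. As a minimal, self-contained illustration (with invented counts, not the study's data), a Pearson χ² statistic for a 2×2 table of run-time group versus injury occurrence can be computed directly:

```python
# Pearson chi-square for a 2x2 table of counts (1 degree of freedom).
# The counts below are hypothetical, chosen only to illustrate the
# calculation behind statistics like chi2(1)=59.3 in the abstract.

def chi_square_2x2(table):
    """Return the Pearson chi-square statistic for a 2x2 count table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [
        [row1 * col1 / n, row1 * col2 / n],
        [row2 * col1 / n, row2 * col2 / n],
    ]
    observed = [[a, b], [c, d]]
    return sum(
        (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
        for i in range(2)
        for j in range(2)
    )

# Hypothetical counts: rows = slow/fast run-time groups,
# columns = injured/uninjured recruits
stat = chi_square_2x2([[120, 380], [60, 440]])
```

A large statistic relative to the χ²(1) distribution (e.g. above 3.84 for p<0.05) indicates an association between run-time group and injury.
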

  17. The influence of microstructure on the probability of early failure in aluminum-based interconnects

    NASA Astrophysics Data System (ADS)

    Dwyer, V. M.

    2004-09-01

    For electromigration in short aluminum interconnects terminated by tungsten vias, the well known "short-line" effect applies. In a similar manner, for longer lines, early failure is determined by a critical value Lcrit for the length of polygranular clusters. Any cluster shorter than Lcrit is "immortal" on the time scale of early failure where the figure of merit is not the standard t50 value (the time to 50% failures), but rather the total probability of early failure, Pcf. Pcf is a complex function of current density, linewidth, line length, and material properties (the median grain size d50 and grain size shape factor σd). It is calculated here using a model based around the theory of runs, which has proved itself to be a useful tool for assessing the probability of extreme events. Our analysis shows that Pcf is strongly dependent on σd, and a change in σd from 0.27 to 0.5 can cause an order of magnitude increase in Pcf under typical test conditions. This has implications for the web-based two-dimensional grain-growth simulator MIT/EmSim, which generates grain patterns with σd=0.27, while typical as-patterned structures are better represented by a σd in the range 0.4 - 0.6. The simulator will consequently overestimate interconnect reliability due to this particular electromigration failure mode.
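The abstract attributes the calculation of Pcf to the theory of runs. As a generic, self-contained illustration of that tool (not the paper's actual electromigration model), the sketch below computes the probability that n independent grains, each "sub-critical" with probability p, contain a run of at least k consecutive sub-critical grains, once by an exact recursion and once by Monte Carlo:

```python
import random

def run_prob_exact(n, k, p):
    """P(at least one run of >= k successes in n Bernoulli(p) trials)."""
    # q[j] = P(no run of length k so far AND current streak length = j)
    q = [0.0] * k
    q[0] = 1.0
    for _ in range(n):
        nxt = [0.0] * k
        nxt[0] = sum(q) * (1 - p)      # a failure resets the streak
        for j in range(1, k):
            nxt[j] = q[j - 1] * p      # a success extends the streak
        q = nxt                        # mass reaching streak k is absorbed
    return 1.0 - sum(q)

def run_prob_mc(n, k, p, trials=20000, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        streak = 0
        for _ in range(n):
            if rng.random() < p:
                streak += 1
                if streak >= k:
                    hits += 1
                    break
            else:
                streak = 0
    return hits / trials

# Illustrative parameters (not from the paper): 100 grains, run of 5,
# each grain sub-critical with probability 0.5.
exact = run_prob_exact(n=100, k=5, p=0.5)
approx = run_prob_mc(n=100, k=5, p=0.5)
```

The recursion runs in O(nk) time and agrees with simulation; this is the kind of extreme-event bookkeeping that makes runs theory useful for rare early-failure probabilities.
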

  18. RESTOP: Retaining External Peripheral State in Intermittently-Powered Sensor Systems

    PubMed Central

    Rodriguez Arreola, Alberto; Balsamo, Domenico

    2018-01-01

    Energy harvesting sensor systems typically incorporate energy buffers (e.g., rechargeable batteries and supercapacitors) to accommodate fluctuations in supply. However, the presence of these elements limits the miniaturization of devices. In recent years, researchers have proposed a new paradigm, transient computing, where systems operate directly from the energy harvesting source and allow computation to span across power cycles, without adding energy buffers. Various transient computing approaches have addressed the challenge of power intermittency by retaining the processor’s state using non-volatile memory. However, no generic approach has yet been proposed to retain the state of peripherals external to the processing element. This paper proposes RESTOP, flexible middleware which retains the state of multiple external peripherals that are connected to a computing element (i.e., a microcontroller) through protocols such as SPI or I2C. RESTOP acts as an interface between the main application and the peripheral, which keeps a record, at run-time, of the transmitted data in order to restore peripheral configuration after a power interruption. RESTOP is practically implemented and validated using three digitally interfaced peripherals, successfully restoring their configuration after power interruptions, imposing a maximum time overhead of 15% when configuring a peripheral. However, this represents an overhead of only 0.82% during complete execution of our typical sensing application, which is substantially lower than existing approaches. PMID:29320441
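The core idea, interposing on peripheral writes so the latest configuration can be replayed after a power interruption, can be sketched in a few lines. The classes and register names below are invented for illustration; RESTOP itself targets real SPI/I2C devices driven from a microcontroller.

```python
# Minimal, hypothetical sketch of the RESTOP idea: record configuration
# writes sent to an external peripheral at run time, then replay them
# after a power failure wipes the peripheral's volatile state.

class Peripheral:
    """Stand-in for an external device whose state is lost on power loss."""
    def __init__(self):
        self.registers = {}

    def write(self, reg, value):
        self.registers[reg] = value

    def power_cycle(self):
        self.registers.clear()   # volatile configuration vanishes

class StateRetainer:
    """Interpose on writes, keeping only the latest value per register."""
    def __init__(self, dev):
        self.dev = dev
        self.log = {}            # reg -> last value written

    def write(self, reg, value):
        self.log[reg] = value    # record at run time
        self.dev.write(reg, value)

    def restore(self):
        """Replay the retained configuration after a power interruption."""
        for reg, value in self.log.items():
            self.dev.write(reg, value)

dev = Peripheral()
restop = StateRetainer(dev)
restop.write("GAIN", 8)
restop.write("MODE", "continuous")
restop.write("GAIN", 4)          # a later write supersedes the first

dev.power_cycle()                # peripheral loses its configuration
restop.restore()                 # replay the retained state
```

Keeping only the last value per register is one reason the restore overhead can stay small relative to a full replay of every transmitted command.
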

  19. Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.

    PubMed

    Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A

    2018-05-01

    Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries-the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges-to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
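The simulation design described above can be caricatured in a toy discrete-time model: pairs arrive over time, and at each match-run mutually compatible pairs are greedily matched. Everything numeric here is invented (arrival rate, compatibility probability), and the study's real pools, cycles, chains, and sensitization priorities are far richer; the sketch only shows the mechanics of varying the match-run interval.

```python
import random

# Toy kidney-exchange simulation: vary the interval between match-runs
# and observe total matches and average waiting time. Two-way exchanges
# only; each adjacent pair in the pool is mutually compatible with a
# fixed, invented probability.

def simulate(match_interval, days=360, arrivals_per_day=1,
             p_compat=0.1, seed=7):
    rng = random.Random(seed)
    pool = []            # arrival day of each waiting pair
    matched, waits = 0, []
    for day in range(days):
        pool.extend([day] * arrivals_per_day)
        if day % match_interval == 0:        # a match-run happens today
            i = 0
            while i < len(pool) - 1:
                if rng.random() < p_compat:  # neighbors mutually compatible
                    waits += [day - pool[i], day - pool[i + 1]]
                    matched += 2
                    del pool[i + 1], pool[i]
                else:
                    i += 1
    avg_wait = sum(waits) / len(waits) if waits else None
    return matched, avg_wait

m_daily, w_daily = simulate(match_interval=1)
m_monthly, w_monthly = simulate(match_interval=30)
```

Comparing the two settings over resampled pools, as the study does with clinical data, is what lets one separate the effect of run frequency from the effect of pool arrival rates.
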

  20. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    PubMed Central

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance; if the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
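The central argument, that a non-convex energy-versus-speed curve makes speed mixtures cheaper than steady motion, admits a compact numeric illustration. The energy curve below is made up (a minimum over two quadratic "walk" and "run" branches), not the measured human cost curve; it merely exhibits the non-convexity the paper relies on.

```python
# Illustration: with a non-convex cost rate, mixing a slow "walk" and a
# fast "run" whose time-weighted mean speed equals the required average
# speed can use less total energy than moving steadily at that speed.

def cost_rate(v):
    """Hypothetical metabolic rate (energy/time) vs speed; non-convex."""
    walk = 2.0 + 1.0 * v ** 2        # walking branch, cheap at low speed
    run = 4.0 + 0.4 * v ** 2         # running branch, cheap at high speed
    return min(walk, run)

def steady_energy(v_avg, distance=1000.0):
    """Energy to cover the distance at one constant speed."""
    return cost_rate(v_avg) * distance / v_avg

def mixture_energy(v_avg, v1, v2, distance=1000.0):
    """Energy spending time fraction f at v1 and 1-f at v2, mean speed v_avg."""
    t = distance / v_avg
    f = (v2 - v_avg) / (v2 - v1)     # time fraction at v1 so mean = v_avg
    return t * (f * cost_rate(v1) + (1 - f) * cost_rate(v2))

v_avg = 1.8                          # an intermediate required speed
e_steady = steady_energy(v_avg)
e_mix = mixture_energy(v_avg, v1=1.2, v2=3.0)
```

Here the walk–run mixture beats the steady gait at the intermediate speed, exactly the non-convexity effect; with a convex cost curve the inequality would reverse (Jensen's inequality).
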

  1. Lower-volume muscle-damaging exercise protects against high-volume muscle-damaging exercise and the detrimental effects on endurance performance.

    PubMed

    Burt, Dean; Lamb, Kevin; Nicholas, Ceri; Twist, Craig

    2015-07-01

    This study examined whether lower-volume exercise-induced muscle damage (EIMD) performed 2 weeks before high-volume muscle-damaging exercise protects against its detrimental effect on running performance. Sixteen male participants were randomly assigned to a lower-volume (five sets of ten squats, n = 8) or high-volume (ten sets of ten squats, n = 8) EIMD group and completed baseline measurements for muscle soreness, knee extensor torque, creatine kinase (CK), a 5-min fixed-intensity running bout and a 3-km running time-trial. Measurements were repeated 24 and 48 h after EIMD, and the running time-trial was repeated after 48 h. Two weeks later, both groups repeated the baseline measurements, ten sets of ten squats and the same follow-up testing (Bout 2). Data analysis revealed increases in muscle soreness and CK and decreases in knee extensor torque 24-48 h after the initial bouts of EIMD. Increases in oxygen uptake [Formula: see text], minute ventilation [Formula: see text] and rating of perceived exertion were observed during fixed-intensity running 24-48 h after EIMD Bout 1. Likewise, time increased and speed and [Formula: see text] decreased during the 3-km running time-trial 48 h after EIMD. Symptoms of EIMD and responses during fixed-intensity running and the running time-trial were attenuated in the days after the repeated bout of high-volume EIMD performed 2 weeks after the initial bout. This study demonstrates that the protective effect of lower-volume EIMD on subsequent high-volume EIMD is transferable to endurance running. Furthermore, time-trial performance was found to be preserved after a repeated bout of EIMD.

  2. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, in order to satisfy the user’s Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user’s QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
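The benefit of checkpoint-based rollback recovery, that a failure costs only the work since the last checkpoint rather than the whole job, can be shown with a minimal sketch. This is a generic illustration of checkpointing, not the paper's ACO scheduler; all parameters are invented.

```python
# Toy model of checkpoint-based rollback recovery: a job of
# `total_units` work saves durable state every `checkpoint_every`
# units; after a failure it resumes from the last checkpoint instead
# of restarting from scratch.

def run_job(total_units, checkpoint_every, fail_at=None):
    """Return (work_units_executed, completed_units) for one run."""
    completed = 0          # durable progress from the last checkpoint
    executed = 0           # total work performed, including redone work
    progress = 0
    while progress < total_units:
        progress += 1
        executed += 1
        if progress % checkpoint_every == 0:
            completed = progress          # save state (checkpoint)
        if fail_at is not None and executed == fail_at:
            progress = completed          # roll back to last checkpoint
            fail_at = None                # fail only once in this sketch
    return executed, progress

# One failure after 75 units of work, on a 100-unit job:
with_ckpt, _ = run_job(total_units=100, checkpoint_every=10, fail_at=75)
no_ckpt, _ = run_job(total_units=100, checkpoint_every=10**9, fail_at=75)
```

With checkpoints every 10 units the failure costs only the 5 units since the last save (105 total units of work); without checkpointing the whole 75 units are redone (175 total), which is the re-computation the scheduling algorithm avoids.
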

  3. A remote-sensing, GIS-based approach to identify, characterize, and model spawning habitat for fall-run chum salmon in a sub-arctic, glacially fed river

    USGS Publications Warehouse

    Wirth, Lisa; Rosenberger, Amanda; Prakash, Anupma; Gens, Rudiger; Margraf, F. Joseph; Hamazaki, Toshihide

    2012-01-01

    At northern limits of a species’ distribution, fish habitat requirements are often linked to thermal preferences and the presence of overwintering habitat. However, logistical challenges and hydrologic processes typical of glacial systems could compromise the identification of these habitats, particularly in large river environments. Our goal was to identify and characterize spawning habitat for fall-run chum salmon Oncorhynchus keta and model habitat selection from spatial distributions of tagged individuals in the Tanana River, Alaska, using an approach that combined ground surveys with remote sensing. Models included braiding, sinuosity, ice-free water surface area (indicating groundwater influence), and persistent ice-free water (i.e., consistent presence of ice-free water for a 12-year period according to satellite imagery). Candidate models containing persistent ice-free water were selected as most likely, highlighting the utility of remote sensing for monitoring and identifying salmon habitat in remote areas. A combination of ground and remote surveys revealed spatial and temporal thermal characteristics of these habitats that could have strong biological implications. Persistent ice-free sites identified using synthetic aperture radar appear to serve as core areas for spawning fall chum salmon, and the importance of stability through time suggests a legacy of successful reproductive effort for this homing species. These features would not be captured with a one-visit traditional survey, but rather required remote-sensing monitoring of the sites through time.

  4. Effect of cycle run time of backwash and relaxation on membrane fouling removal in submerged membrane bioreactor treating sewage at higher flux.

    PubMed

    Tabraiz, Shamas; Haydar, Sajjad; Sallis, Paul; Nasreen, Sadia; Mahmood, Qaisar; Awais, Muhammad; Acharya, Kishor

    2017-08-01

    Intermittent backwashing and relaxation are mandatory in the membrane bioreactor (MBR) for its effective operation. The objective of the current study was to evaluate the effects of run-relaxation and run-backwash cycle times on fouling rates, and to compare the effects of backwashing and relaxation on the fouling behavior of the membrane in a high-rate submerged MBR. The study was carried out on a laboratory-scale MBR at high flux (30 L/m²·h), treating sewage. The MBR was operated under three relaxation operational scenarios, keeping the ratio of run time to relaxation time constant, and under three backwashing operational scenarios, keeping the ratio of run time to backwashing time constant. The results revealed that the provision of relaxation or backwashing at short intervals prolonged MBR operation by reducing fouling rates. The cake and pore fouling rates in the backwashing scenarios were far lower than in the relaxation scenarios, showing that backwashing is a better option than relaxation. The operation time of the backwashing scenario with the lowest cycle time was 64.6% and 21.1% longer than that of the continuous scenario and the relaxation scenario with the lowest cycle time, respectively. Increasing the cycle time increased removal efficiencies only insignificantly, in both the relaxation and backwashing scenarios.

  5. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  6. Attenuation of foot pressure during running on four different surfaces: asphalt, concrete, rubber, and natural grass.

    PubMed

    Tessutti, Vitor; Ribeiro, Ana Paula; Trombini-Souza, Francis; Sacco, Isabel C N

    2012-01-01

    The practice of running has consistently increased worldwide, and with it, related lower limb injuries. The type of running surface has been associated with running injury etiology, in addition to other factors, such as the relationship between the amount and intensity of training. There is still controversy in the literature regarding the biomechanical effects of different types of running surfaces on foot-floor interaction. The aim of this study was to investigate the influence of running on asphalt, concrete, natural grass, and rubber on in-shoe pressure patterns in adult recreational runners. Forty-seven adult recreational runners ran twice for 40 m on each of the four surfaces at 12 km·h⁻¹ (± 5%). Peak pressure, pressure-time integral, and contact time were recorded by Pedar X insoles. Asphalt and concrete were similar for all plantar variables and pressure zones. Running on grass produced peak pressures 9.3% to 16.6% lower (P < 0.001) than the other surfaces in the rearfoot and 4.7% to 12.3% (P < 0.05) lower in the forefoot. The contact time on rubber was greater than on concrete for the rearfoot and midfoot. The behaviour of rubber was similar to that of the rigid surfaces (concrete and asphalt), possibly because of its age (five years of usage). Running on natural grass attenuates in-shoe plantar pressures in recreational runners. If a runner controls the amount and intensity of practice, running on grass may reduce the total stress on the musculoskeletal system compared with running on more rigid surfaces, such as asphalt and concrete.

  7. Nocturnal to Diurnal Switches with Spontaneous Suppression of Wheel-Running Behavior in a Subterranean Rodent

    PubMed Central

    Tachinardi, Patricia; Tøien, Øivind; Valentinuzzi, Veronica S.; Buck, C. Loren; Oda, Gisele A.

    2015-01-01

    Several rodent species that are diurnal in the field become nocturnal in the lab. It has been suggested that the use of running-wheels in the lab might contribute to this timing switch. This proposition is based on studies that indicate feed-back of vigorous wheel-running on the period and phase of circadian clocks that time daily activity rhythms. Tuco-tucos (Ctenomys aff. knighti) are subterranean rodents that are diurnal in the field but are robustly nocturnal in laboratory, with or without access to running wheels. We assessed their energy metabolism by continuously and simultaneously monitoring rates of oxygen consumption, body temperature, general motor and wheel running activity for several days in the presence and absence of wheels. Surprisingly, some individuals spontaneously suppressed running-wheel activity and switched to diurnality in the respirometry chamber, whereas the remaining animals continued to be nocturnal even after wheel removal. This is the first report of timing switches that occur with spontaneous wheel-running suppression and which are not replicated by removal of the wheel. PMID:26460828

  8. The repeated bout effect of traditional resistance exercises on running performance across 3 bouts.

    PubMed

    Doma, Kenji; Schumann, Moritz; Leicht, Anthony Scott; Heilbronn, Brian Edward; Damas, Felipe; Burt, Dean

    2017-09-01

    This study investigated the repeated bout effect of 3 typical lower body resistance-training sessions on maximal and submaximal effort running performance. Twelve resistance-untrained men (age, 24 ± 4 years; height, 1.81 ± 0.10 m; body mass, 79.3 ± 10.9 kg; peak oxygen uptake, 48.2 ± 6.5 mL·kg⁻¹·min⁻¹; 6-repetition maximum squat, 71.7 ± 12.2 kg) undertook 3 bouts of resistance-training sessions at 6-repetitions maximum. Countermovement jump (CMJ), lower-body range of motion (ROM), muscle soreness, and creatine kinase (CK) were examined prior to and immediately, 24 h (T24), and 48 h (T48) after each resistance-training bout. Submaximal (i.e., below anaerobic threshold (AT)) and maximal (i.e., above AT) running performances were also conducted at T24 and T48. Most indirect muscle damage markers (i.e., CMJ, ROM, and muscle soreness) and submaximal running performance were significantly improved (P < 0.05; 1.9%) following the third resistance-training bout compared with the second bout. Whilst maximal running performance was also improved following the third bout (P < 0.05; 9.8%) compared with other bouts, the measures were still reduced by 12%-20% versus baseline. However, the increase in CK was attenuated following the second bout (P < 0.05) with no further protection following the third bout (P > 0.05). In conclusion, the initial bout induced the greatest change in CK; however, at least 2 bouts were required to produce protective effects on other indirect muscle damage markers and submaximal running performance measures. This suggests that submaximal running sessions should be avoided for at least 48 h after resistance training until the third bout, although a greater recovery period may be required for maximal running sessions.

  9. Can anti-gravity running improve performance to the same degree as over-ground running?

    PubMed

    Brennan, Christopher T; Jenkins, David G; Osborne, Mark A; Oyewale, Michael; Kelly, Vincent G

    2018-03-11

    This study examined the changes in running performance, maximal blood lactate concentrations and running kinematics between anti-gravity (AG) treadmill running at 85% body mass (85%BM) and normal over-ground (OG) running over an 8-week training period. Fifteen elite male developmental cricketers were assigned to either the AG or over-ground (CON) running group. The AG group (n = 7) ran twice a week on an AG treadmill and once per week over-ground. The CON group (n = 8) completed all sessions OG on grass. Both AG and OG training resulted in similar improvements in time trial and shuttle run performance. Maximal running performance showed moderate differences between the groups; however, the AG condition resulted in less improvement. Large differences in maximal blood lactate concentrations existed, with OG running resulting in greater improvements in blood lactate concentrations measured during maximal running. Moderate increases in stride length paired with moderate decreases in stride rate also resulted from AG training. The use of AG training to supplement regular OG training for performance should be approached cautiously, as extended use over long periods of time could lead to altered stride mechanics and reduced blood lactate.

  10. Exercise in space: the European Space Agency approach to in-flight exercise countermeasures for long-duration missions on ISS.

    PubMed

    Petersen, Nora; Jaekel, Patrick; Rosenberger, Andre; Weber, Tobias; Scott, Jonathan; Castrucci, Filippo; Lambrecht, Gunda; Ploutz-Snyder, Lori; Damann, Volker; Kozlovskaya, Inessa; Mester, Joachim

    2016-01-01

    To counteract microgravity (µG)-induced adaptation, European Space Agency (ESA) astronauts on long-duration missions (LDMs) to the International Space Station (ISS) perform a daily physical exercise countermeasure program. Since the first ESA crewmember completed an LDM in 2006, the ESA countermeasure program has strived to provide efficient protection against decreases in body mass, muscle strength, bone mass, and aerobic capacity within the operational constraints of the ISS environment and the changing availability of on-board exercise devices. The purpose of this paper is to provide a description of ESA's individualised approach to in-flight exercise countermeasures and an up-to-date picture of how exercise is used to counteract physiological changes resulting from µG-induced adaptation. Changes in the absolute workload for resistive exercise, treadmill running and cycle ergometry throughout ESA's eight LDMs are also presented, and aspects of pre-flight physical preparation and post-flight reconditioning outlined. With the introduction of the advanced resistive exercise device (ARED) in 2009, the relative contribution of resistance exercise to total in-flight exercise increased (33-46 %), whilst treadmill running (42-33 %) and cycle ergometry (26-20 %) decreased. All eight ESA crewmembers increased their in-flight absolute workload during their LDMs for resistance exercise and treadmill running (running speed and vertical loading through the harness), while cycle ergometer workload was unchanged across missions. Increased or unchanged absolute exercise workloads in-flight would appear contradictory to typical post-flight reductions in muscle mass and strength, and cardiovascular capacity following LDMs. 
However, increased absolute in-flight workloads are not directly linked to changes in exercise capacity as they likely also reflect the planned, conservative loading early in the mission to allow adaptation to µG exercise, including personal comfort issues with novel exercise hardware (e.g. the treadmill harness). Inconsistency in hardware and individualised support concepts across time limits the comparability of results from different crewmembers, and questions regarding the difference between cycling and running in µG versus identical exercise on Earth, and other factors that might influence in-flight exercise performance, still require further investigation.

  11. Simplified programming and control of automated radiosynthesizers through unit operations.

    PubMed

    Claggett, Shane B; Quinn, Kevin M; Lazari, Mark; Moore, Melissa D; van Dam, R Michael

    2013-07-15

    Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client-server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client-server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. 
The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client-server architecture provided robustness and flexibility.
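The unit-operation idea can be sketched as follows: a synthesis program becomes a short list of high-level chemistry steps rather than hundreds of low-level hardware commands. The operation names and parameters below are illustrative assumptions only, not the actual ELIXYS software API.

```python
# Illustrative sketch of a unit-operation program representation
# (hypothetical names; not the real ELIXYS API).
from dataclasses import dataclass, field

@dataclass
class UnitOp:
    name: str                      # e.g. "ADD", "REACT", "EVAPORATE"
    params: dict = field(default_factory=dict)

def describe(program):
    """Render a program as one human-readable line per unit operation."""
    return [f"{i + 1}. {op.name} {op.params}" for i, op in enumerate(program)]

# A toy [18F]FDG-like sequence expressed as unit operations
fdg_program = [
    UnitOp("ADD", {"reactor": 1, "reagent": "K222/K2CO3"}),
    UnitOp("EVAPORATE", {"reactor": 1, "temp_C": 110}),
    UnitOp("ADD", {"reactor": 1, "reagent": "mannose triflate"}),
    UnitOp("REACT", {"reactor": 1, "temp_C": 85, "time_s": 300}),
    UnitOp("EVAPORATE", {"reactor": 1, "temp_C": 95}),
]
```

Expressed this way, a whole synthesis is a handful of self-describing steps, which is what makes such programs short to write and easy to debug compared with hardware-level sequences.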

  12. Simplified programming and control of automated radiosynthesizers through unit operations

    PubMed Central

    2013-01-01

    Background Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Methods Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client–server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. Results The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. 
The client–server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. Conclusions We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client–server architecture provided robustness and flexibility. PMID:23855995

  13. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods: simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, it was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
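For comparison, classical simulated annealing on a discretized (binary) beamlet objective can be sketched in a few lines. This is a toy stand-in, assuming a simple quadratic dose penalty; the study's objective included dose-volume terms computed from CERR dose matrices, and the QA hardware solves the analogous binary problem natively.

```python
# Toy simulated annealing over binary beamlet weights (illustrative only).
import math
import random

random.seed(0)

def objective(x, dose, target):
    # Quadratic penalty between delivered dose (dose matrix times weights)
    # and the prescribed target dose, summed over voxels.
    delivered = [sum(d * xi for d, xi in zip(row, x)) for row in dose]
    return sum((d - t) ** 2 for d, t in zip(delivered, target))

def anneal(dose, target, n, steps=5000, t0=1.0):
    x = [random.randint(0, 1) for _ in range(n)]      # binary beamlet weights
    f_cur = objective(x, dose, target)
    best, best_x = f_cur, x[:]
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9            # linear cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                                     # propose one bit flip
        f_new = objective(x, dose, target)
        if f_new <= f_cur or random.random() < math.exp((f_cur - f_new) / temp):
            f_cur = f_new                             # accept (Metropolis rule)
            if f_cur < best:
                best, best_x = f_cur, x[:]
        else:
            x[i] ^= 1                                 # reject: undo the flip
    return best, best_x

# Toy problem: 3 voxels, 6 beamlets, target dose of 2 in each voxel
dose = [[1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1],
        [1, 1, 0, 0, 1, 1]]
best, weights = anneal(dose, [2, 2, 2], n=6)
```

The clinical problem differs mainly in scale (hundreds of variables, dose-volume constraints), which is exactly where annealing run time, classical or quantum, becomes the interesting question.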

  14. Comparison of tobacco control scenarios: quantifying estimates of long-term health impact using the DYNAMO-HIA modeling tool.

    PubMed

    Kulik, Margarete C; Nusselder, Wilma J; Boshuizen, Hendriek C; Lhachimi, Stefan K; Fernández, Esteve; Baili, Paolo; Bennett, Kathleen; Mackenbach, Johan P; Smit, H A

    2012-01-01

    There are several types of tobacco control interventions/policies which can change future smoking exposure. The most basic intervention types are 1) smoking cessation interventions 2) preventing smoking initiation and 3) implementation of a nationwide policy affecting quitters and starters simultaneously. The possibility for dynamic quantification of such different interventions is key for comparing the timing and size of their effects. We developed a software tool, DYNAMO-HIA, which allows for a quantitative comparison of the health impact of different policy scenarios. We illustrate the outcomes of the tool for the three typical types of tobacco control interventions if these were applied in the Netherlands. The tool was used to model the effects of different types of smoking interventions on future smoking prevalence and on health outcomes, comparing these three scenarios with the business-as-usual scenario. The necessary data input was obtained from the DYNAMO-HIA database which was assembled as part of this project. All smoking interventions will be effective in the long run. The population-wide strategy will be most effective in both the short and long term. The smoking cessation scenario will be second-most effective in the short run, though in the long run the smoking initiation scenario will become almost as effective. Interventions aimed at preventing the initiation of smoking need a long time horizon to become manifest in terms of health effects. The outcomes strongly depend on the groups targeted by the intervention. We calculated how much more effective the population-wide strategy is, in both the short and long term, compared to quit smoking interventions and measures aimed at preventing the initiation of smoking. By allowing a great variety of user-specified choices, the DYNAMO-HIA tool is a powerful instrument by which the consequences of different tobacco control policies and interventions can be assessed.
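The qualitative ranking of the three intervention types can be illustrated with a minimal prevalence projection. This is a toy sketch with made-up rates, not the DYNAMO-HIA engine, which models age- and sex-specific risk-factor states linked to disease outcomes.

```python
# Toy projection of smoking prevalence under three intervention scenarios
# (illustrative dynamics and rates only; not the DYNAMO-HIA model).
def project(p0, years, start_rate, quit_rate):
    """Evolve prevalence p with yearly initiation among non-smokers
    and cessation among smokers."""
    p = p0
    for _ in range(years):
        p = p + start_rate * (1 - p) - quit_rate * p
    return p

baseline   = project(0.25, 20, start_rate=0.010, quit_rate=0.020)
cessation  = project(0.25, 20, start_rate=0.010, quit_rate=0.040)  # more quitting
initiation = project(0.25, 20, start_rate=0.005, quit_rate=0.020)  # fewer starters
combined   = project(0.25, 20, start_rate=0.005, quit_rate=0.040)  # nationwide policy
```

Even this crude model reproduces the paper's qualitative pattern: the combined population-wide scenario yields the lowest prevalence, cessation acts faster than preventing initiation, and initiation effects need a long horizon to accumulate.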

  15. Automated symbolic calculations in nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Kröger, Martin; Hütter, Markus

    2010-12-01

    We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration, at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica™ notebook which allows this task to be performed conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics. Program summary: Program title: Poissonbracket.nb; Catalogue identifier: AEGW_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 227 952; No. of bytes in distributed program, including test data, etc.: 268 918; Distribution format: tar.gz; Programming language: Mathematica™ 7.0; Computer: Any computer running Mathematica™ 6.0 and later versions; Operating system: Linux, MacOS, Windows; RAM: 100 MB; Classification: 4.2, 5, 23. Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica™ notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form. Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals, at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica™. Running time: for the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
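For reference, the property the notebook verifies is the Jacobi identity; for functionals A, B, C of the fields it reads

```latex
\{A,\{B,C\}\} + \{B,\{C,A\}\} + \{C,\{A,B\}\} = 0 ,
```

where the functional bracket is built from variational derivatives, schematically $\{A,B\} = \int \frac{\delta A}{\delta x}\, L\, \frac{\delta B}{\delta x}\,\mathrm{d}\mathbf{r}$ with $L$ the operator defining the candidate Poisson bracket. (This is the generic form used in nonequilibrium thermodynamics; the notebook's own notation may differ in detail.)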

  16. Comparison of Tobacco Control Scenarios: Quantifying Estimates of Long-Term Health Impact Using the DYNAMO-HIA Modeling Tool

    PubMed Central

    Kulik, Margarete C.; Nusselder, Wilma J.; Boshuizen, Hendriek C.; Lhachimi, Stefan K.; Fernández, Esteve; Baili, Paolo; Bennett, Kathleen; Mackenbach, Johan P.; Smit, H. A.

    2012-01-01

    Background There are several types of tobacco control interventions/policies which can change future smoking exposure. The most basic intervention types are 1) smoking cessation interventions 2) preventing smoking initiation and 3) implementation of a nationwide policy affecting quitters and starters simultaneously. The possibility for dynamic quantification of such different interventions is key for comparing the timing and size of their effects. Methods and Results We developed a software tool, DYNAMO-HIA, which allows for a quantitative comparison of the health impact of different policy scenarios. We illustrate the outcomes of the tool for the three typical types of tobacco control interventions if these were applied in the Netherlands. The tool was used to model the effects of different types of smoking interventions on future smoking prevalence and on health outcomes, comparing these three scenarios with the business-as-usual scenario. The necessary data input was obtained from the DYNAMO-HIA database which was assembled as part of this project. All smoking interventions will be effective in the long run. The population-wide strategy will be most effective in both the short and long term. The smoking cessation scenario will be second-most effective in the short run, though in the long run the smoking initiation scenario will become almost as effective. Interventions aimed at preventing the initiation of smoking need a long time horizon to become manifest in terms of health effects. The outcomes strongly depend on the groups targeted by the intervention. Conclusion We calculated how much more effective the population-wide strategy is, in both the short and long term, compared to quit smoking interventions and measures aimed at preventing the initiation of smoking. 
By allowing a great variety of user-specified choices, the DYNAMO-HIA tool is a powerful instrument by which the consequences of different tobacco control policies and interventions can be assessed. PMID:22384230

  17. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  18. 40 CFR Table 1a to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...

  19. 40 CFR Table 1a to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...

  20. Regulation of step frequency in transtibial amputee endurance athletes using a running-specific prosthesis.

    PubMed

    Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han

    2017-01-25

    Running-specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running, as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis, or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with a unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Step time, ground contact time, flight time, leg stiffness, and joint kinetics, among other variables, were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. Accordingly, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance.
Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
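Stiffness during running is often estimated from spring-mass assumptions; a common field approximation (Morin et al., 2005) needs only body mass, ground contact time, and flight time. The sketch below is for illustration only, since the study above computed stiffness from measured kinetics and kinematics; the example runner's numbers are assumed, not from the paper.

```python
# Vertical stiffness from the sine-wave spring-mass approximation
# (Morin et al., 2005). Inputs: body mass (kg), contact and flight times (s).
import math

def vertical_stiffness(mass, t_contact, t_flight):
    g = 9.81
    # Peak vertical force modeled as a half-sine over the contact phase
    f_max = mass * g * (math.pi / 2) * (t_flight / t_contact + 1)
    # Downward displacement of the centre of mass during contact
    dz = (f_max * t_contact ** 2) / (mass * math.pi ** 2) - g * t_contact ** 2 / 8
    return f_max / dz   # N/m

# Assumed example: 70 kg runner, 0.25 s contact, 0.10 s flight
k = vertical_stiffness(70.0, 0.25, 0.10)   # on the order of 25 kN/m
```

The model makes the paper's finding concrete: if contact time does not change with step frequency, as in the prosthetic leg, the estimated stiffness barely changes either.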

  1. Sex differences in association of race performance, skin-fold thicknesses, and training variables for recreational half-marathon runners.

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Senn, Oliver

    2010-12-01

    The purpose of this study was to investigate the association between selected skin-fold thicknesses and training variables with a half-marathon race time, for both male and female recreational runners, using bi- and multivariate analysis. In 52 men, two skin-fold thicknesses (abdominal and calf) were significantly and positively correlated with race time; whereas in 15 women, five (pectoral, mid-axilla, subscapular, abdominal, and suprailiac) showed positive and significant relations with total race time. In men, the mean weekly running distance, minimum distance run per week, maximum distance run per week, mean weekly hours of running, number of running training sessions per week, and mean speed of the training sessions were significantly and negatively related to total race time, but not in women. Interaction analyses suggested that race time was more strongly associated with anthropometry in women than men. Race time for the women was independently associated with the sum of eight skin-folds; but for the men, only the mean speed during training sessions was independently associated. Skin-fold thicknesses and training variables in these groups were differently related to race time according to their sex.

  2. American Academy of Podiatric Sports Medicine

    MedlinePlus

    ... Runblogger Running Product Reviews Running Research Junkie Running Times The ... © American Academy of Podiatric Sports Medicine

  3. Jumping and hopping in elite and amateur orienteering athletes and correlations to sprinting and running.

    PubMed

    Hébert-Losier, Kim; Jensen, Kurt; Holmberg, Hans-Christer

    2014-11-01

    Jumping and hopping are used to measure lower-body muscle power, stiffness, and stretch-shortening-cycle utilization in sports, with several studies reporting correlations between such measures and sprinting and/or running abilities in athletes. Neither jumping and hopping nor correlations with sprinting and/or running have been examined in orienteering athletes. The authors investigated squat jump (SJ), countermovement jump (CMJ), standing long jump (SLJ), and hopping performed by 8 elite and 8 amateur male foot-orienteering athletes (29 ± 7 y, 183 ± 5 cm, 73 ± 7 kg) and possible correlations to road, path, and forest running and sprinting performance, as well as running economy, velocity at anaerobic threshold, and peak oxygen uptake (VO(2peak)) from treadmill assessments. During SJs and CMJs, elites demonstrated superior relative peak forces, times to peak force, and prestretch augmentation, albeit lower SJ heights and peak powers. Between-groups differences were unclear for CMJ heights, hopping stiffness, and most SLJ parameters. Large pairwise correlations were observed between relative peak and time to peak forces and sprinting velocities; time to peak forces and running velocities; and prestretch augmentation and forest-running velocities. Prestretch augmentation and time to peak forces were moderately correlated to VO(2peak). Correlations between running economy and jumping or hopping were small or trivial. Overall, the elites exhibited superior stretch-shortening-cycle utilization and rapid generation of high relative maximal forces, especially vertically. These functional measures were more closely related to sprinting and/or running abilities, indicating benefits of lower-body training in orienteering.

  4. Reduced step length reduces knee joint contact forces during running following anterior cruciate ligament reconstruction but does not alter inter-limb asymmetry.

    PubMed

    Bowersock, Collin D; Willy, Richard W; DeVita, Paul; Willson, John D

    2017-03-01

    Anterior cruciate ligament reconstruction is associated with early onset knee osteoarthritis. Running is a typical activity following this surgery, but elevated knee joint contact forces are thought to contribute to osteoarthritis degenerative processes. It is therefore clinically relevant to identify interventions to reduce contact forces during running among individuals after anterior cruciate ligament reconstruction. The primary purpose of this study was to evaluate the effect of reducing step length during running on patellofemoral and tibiofemoral joint contact forces among people with a history of anterior cruciate ligament reconstruction. Inter-limb knee joint contact force differences during running were also examined. 18 individuals at an average of 54.8 months after unilateral anterior cruciate ligament reconstruction ran in 3 step length conditions (preferred, -5%, -10%). Bilateral patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, loading rate, impulse, and impulse per kilometer were evaluated between step length conditions and limbs using separate 2-factor analyses of variance. Reducing step length 5% decreased patellofemoral, tibiofemoral, and medial tibiofemoral compartment peak force, impulse, and impulse per kilometer bilaterally. A 10% step length reduction further decreased peak forces and force impulses, but did not further reduce force impulses per kilometer. Tibiofemoral joint impulse, impulse per kilometer, and patellofemoral joint loading rate were lower in the previously injured limb compared to the contralateral limb. Running with a shorter step length is a feasible clinical intervention to reduce knee joint contact forces during running among people with a history of anterior cruciate ligament reconstruction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Grounded running in quails: simulations indicate benefits of observed fixed aperture angle between legs before touch-down.

    PubMed

    Andrada, Emanuel; Rode, Christian; Blickhan, Reinhard

    2013-10-21

    Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template-related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed the aperture angle), and fixed it about 30 ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% of leg rest length, in the vast majority of cases the bipedal SLIP completed at least 50 steps before falling. The fixed angle of attack strategy did not achieve this. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations. © 2013 Elsevier Ltd. All rights reserved.
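The geometric difference between the two swing-leg placement policies can be made explicit. The sketch below covers the touchdown geometry only; the full BSLIP model additionally integrates the stance-phase spring dynamics.

```python
# The two swing-leg placement policies compared in the paper (geometry only).
# Angles are measured from the ground, in radians.
import math

def touchdown_angle_fixed_attack(alpha0, stance_angle):
    """Fixed angle of attack: the touchdown angle w.r.t. the ground is a
    constant alpha0, independent of the stance leg's current orientation."""
    return alpha0

def touchdown_angle_fixed_aperture(phi, stance_angle):
    """Fixed aperture: the swing leg is held at a constant angle phi relative
    to the stance leg, so the touchdown angle follows the stance leg."""
    return stance_angle - phi

# With a 30 deg aperture and the stance leg at 100 deg from the ground
# (i.e. past vertical), the swing leg touches down at 70 deg.
a = touchdown_angle_fixed_aperture(math.radians(30), math.radians(100))
```

Under the aperture policy, late or early touchdown after a perturbation automatically shifts the landing angle, which is one intuition for the improved robustness reported above.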

  6. Evaluating Commercial and Private Cloud Services for Facility-Scale Geodetic Data Access, Analysis, and Services

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.

    2017-12-01

    UNAVCO, in its role as a NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well-established, significant questions about suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations). In contrast to the 2-3 year time scale for the research cycle, the time scale of operation and planning for NSF facilities is for a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similar deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility scale production services in the cloud. 
The work includes running cloud-based services in parallel with their on-premises equivalents: data serving via FTP from a large data store, operation of a metadata database, production-scale processing of multiple months of geodetic data, web services delivery of quality-checked data and products, large-scale compute services for event post-processing, and serving real-time data from a network of 700-plus GPS stations. The evaluation is based on a suite of metrics that we have developed to elucidate the effectiveness of cloud-based services in price, performance, and management. Services are currently running in AWS and evaluation is underway.

  7. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.
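The parallel-cycle idea itself is simple to sketch: instead of marching one simulation through N cycles, launch N independent cycle simulations, each from a slightly perturbed initial state, and gather statistics afterwards. Below, simulate_cycle is a mock stand-in returning an invented "peak pressure"; a real LES cycle is a large CFD job dispatched to a cluster.

```python
# Toy illustration of the parallel-cycle strategy (mock physics).
from concurrent.futures import ThreadPoolExecutor
import random

def simulate_cycle(seed):
    """Stand-in for one engine-cycle simulation: the seed perturbs the
    initial state; the return value mimics a peak cylinder pressure (bar)."""
    rng = random.Random(seed)
    return 40.0 + rng.gauss(0.0, 1.5)

def run_cycles_parallel(n_cycles, workers=4):
    # Each cycle is independent, so they can run concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_cycle, range(n_cycles)))

peaks = run_cycles_parallel(35)
mean_peak = sum(peaks) / len(peaks)
# Coefficient of variation: a simple cycle-to-cycle variability measure
cov = (sum((p - mean_peak) ** 2 for p in peaks) / len(peaks)) ** 0.5 / mean_peak
```

The design point is that wall-clock time scales with one cycle plus scheduling overhead rather than with N cycles, which is where the order-of-magnitude speedup quoted above comes from.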

  8. 77 FR 60165 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass September 26, 2012. I... FICC proposes to move the time at which its Mortgage-Backed Securities Division (``MBSD'') runs its... processing passes. MBSD currently runs its first processing pass of the day (historically referred to as the...

  9. Model Analysis of the Factors Regulating Trends and Variability of Methane, Carbon Monoxide and OH: 1. Model Validation

    NASA Technical Reports Server (NTRS)

    Elshorbany, Y. F.; Strode, S.; Wang, J.; Duncan, B.

    2014-01-01

    Methane (CH4) is the second most important anthropogenic greenhouse gas (GHG). Its 100-year global warming potential (GWP) is 25 times larger than that for carbon dioxide. The 100-yr integrated GWP of CH4 is sensitive to changes in OH levels. Methane's atmospheric growth rate was estimated to be more than 10 ppb yr^-1 in 1998 but less than zero in 2001, 2004 and 2005 (Kirschke et al., 2013). Since 2006, CH4 has been increasing again. This phenomenon is not yet well understood. Oxidation of CH4 by OH is the main loss process, thus affecting the oxidizing capacity of the atmosphere and contributing to the global ozone background. Current models typically use an annual cycle of offline OH fields to simulate CH4. The implemented OH fields in these models are typically tuned so that simulated CH4 growth rates match those measured. For future and climate simulations, the OH tuning technique may not be suitable. In addition, running full-chemistry, multi-decadal CH4 simulations is a serious challenge and, due to computational intensity, currently almost impossible.

  10. A latest developed all permanent magnet ECRIS for atomic physics research at IMP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, L.T.; Zhao, H.W.; Zhang, Z.M.

    2006-03-15

    Electron cyclotron resonance (ECR) ion sources have been used for atomic physics research for a long time. With the development of atomic physics research in the Institute of Modern Physics (IMP), additional high performance experimental facilities are required. A 300 kV high voltage (HV) platform has been under construction since 2003, and an all permanent magnet ECR ion source is to be installed on the platform. Lanzhou all permanent magnet ECR ion source No. 2 (LAPECR2) is the latest developed all permanent magnet ECRIS. It weighs 900 kg, with outer dimensions (magnetic body) of 650 mm x 562 mm. The injection magnetic field of the source is 1.28 T and the extraction magnetic field is 1.07 T. This source is designed to run at 14.5 GHz. The high magnetic field inside the plasma chamber enables the source to give good performance at 14.5 GHz. The LAPECR2 source is now under commissioning at IMP. In this article, the typical parameters of the LAPECR2 source are listed, and typical results of the preliminary commissioning are presented.

  11. [Construction of information management-based virtual forest landscape and its application].

    PubMed

    Chen, Chongcheng; Tang, Liyu; Quan, Bing; Li, Jianwei; Shi, Song

    2005-11-01

    Based on the analysis of the contents and technical characteristics of different scale forest visualization modeling, this paper put forward the principles and technical systems for constructing an information management-based virtual forest landscape. Combining process modeling with tree geometric structure description, a software method for interactive, parameterized tree modeling was developed, and the corresponding rendering and geometric-element simplification algorithms were described to speed up rendering at run time. As a pilot study, the geometrical model bases associated with the typical tree categories in Zhangpu County of Fujian Province, southeast China were established as template files. A Virtual Forest Management System prototype was developed with a GIS component (ArcObjects), the OpenGL graphics environment, and the Visual C++ language, based on forest inventory and remote sensing data. The prototype could be used for roaming between 2D and 3D, information query and analysis, and virtual and interactive forest growth simulation, and its realism and accuracy could meet the needs of forest resource management. Some typical interfaces of the system and illustrative scene cross-sections of simulated masson pine growth under conditions of competition and thinning were presented.

  12. No Breathing in the Aisles: Diesel Exhaust inside School Buses.

    ERIC Educational Resources Information Center

    Solomon, Gina M.; Campbell, Todd R.; Feuer, Gail Ruderman; Masters, Julie; Samkian, Artineh; Paul, Kavita Ann

    There is evidence that diesel exhaust causes cancer and premature death, and also exacerbates asthma and other respiratory illness. Noting that the vast majority of the nation's school buses run on diesel fuel, this report details a study examining the level of diesel exhaust to which children are typically exposed as they travel to and from…

  13. National Affiliation or Local Representation: When TFA Alumni Run for School Board

    ERIC Educational Resources Information Center

    Jacobsen, Rebecca; Linkow, Tamara Wilder

    2014-01-01

    Historically power to govern public schools has been delegated to local school boards. However, this arrangement of power has been shifting over the past half century and increasingly, local school boards are targeted as ineffective and antiquated. Teach For America (TFA), typically examined for its placement of teachers, also seeks to develop…

  14. School Biz: 10 Business Practices to Help Your District Maximize Resources and Run Smoothly

    ERIC Educational Resources Information Center

    Simkins, Michael

    2007-01-01

    Education is not a business! That's a typical response when anyone suggests that public schools should behave more like businesses. While this is true, one indisputable fact about businesses is that inefficiency equals failure. Education stands to benefit from the same survival tactics as its private sector counterparts. This article presents ten…

  15. 60-Hz electric and magnetic fields generated by a distribution network.

    PubMed

    Héroux, P

    1987-01-01

    From a mobile unit, 60-Hz electric and magnetic fields generated by Hydro-Québec's distribution network were measured. Nine runs, representative of various human environments, were investigated. Typical values were 32 V/m and 0.16 microT. The electrical distribution networks investigated were major contributors to the electric and magnetic environments.

  16. Altered Dynamics of the fMRI Response to Faces in Individuals with Autism

    ERIC Educational Resources Information Center

    Kleinhans, Natalia M.; Richards, Todd; Greenson, Jessica; Dawson, Geraldine; Aylward, Elizabeth

    2016-01-01

    Abnormal fMRI habituation in autism spectrum disorders (ASDs) has been proposed as a critical component in social impairment. This study investigated habituation to fearful faces and houses in ASD and whether fMRI measures of brain activity discriminate between ASD and typically developing (TD) controls. Two identical fMRI runs presenting masked…

  17. Specific aspects of contemporary triathlon: implications for physiological analysis and performance.

    PubMed

    Bentley, David J; Millet, Grégoire P; Vleck, Verónica E; McNaughton, Lars R

    2002-01-01

    Triathlon competitions are performed over markedly different distances and under a variety of technical constraints. In 'standard-distance' triathlons involving 1.5km swim, 40km cycling and 10km running, a World Cup series as well as a World Championship race is available for 'elite' competitors. In contrast, 'age-group' triathletes may compete in 5-year age categories at a World Championship level, but not against the elite competitors. The difference between elite and age-group races is that during the cycle stage elite competitors may 'draft' or cycle in a sheltered position; age-group athletes complete the cycle stage as an individual time trial. Within triathlons there are a number of specific aspects that make the physiological demands different from the individual sports of swimming, cycling and running. The physiological demands of the cycle stage in elite races may also differ compared with the age-group format. This in turn may influence performance during the cycle leg and subsequent running stage. Wetsuit use and drafting during swimming (in both elite and age-group races) result in improved buoyancy and a reduction in frontal resistance, respectively. Both of these factors will result in improved performance and efficiency relative to normal pool-based swimming efforts. Overall cycling performance after swimming in a triathlon is not typically affected. However, it is possible that during the initial stages of the cycle leg the ability of an athlete to generate the high power outputs necessary for tactical position changes may be impeded. Drafting during cycling results in a reduction in frontal resistance and reduced energy cost at a given submaximal intensity. The reduced energy expenditure during the cycle stage results in an improvement in running, so an athlete may exercise at a higher percentage of maximal oxygen uptake. 
In elite triathlon races, the cycle courses offer specific physiological demands that may result in different fatigue responses when compared with standard time-trial courses. Furthermore, it is possible that different physical and physiological characteristics may make some athletes more suited to races where the cycle course is either flat or has undulating sections. An athlete's ability to perform running activity after cycling, during a triathlon, may be influenced by the pedalling frequency and also the physiological demands of the cycle stage. The technical features of elite and age-group triathlons together with the physiological demands of longer distance events should be considered in experimental design, training practice and also performance diagnosis of triathletes.

  18. Lower-body determinants of running economy in male and female distance runners.

    PubMed

    Barnes, Kyle R; Mcguigan, Michael R; Kilding, Andrew E

    2014-05-01

    A variety of training approaches have been shown to improve running economy in well-trained athletes. However, there is a paucity of data exploring lower-body determinants that may affect running economy and account for differences that may exist between genders. Sixty-three male and female distance runners were assessed in the laboratory for a range of metabolic, biomechanical, and neuromuscular measures potentially related to running economy (ml·kg^-1·min^-1) at a range of running speeds. At all common test velocities, women were more economical than men (effect size [ES] = 0.40); however, when compared in terms of relative intensity, men had better running economy (ES = 2.41). Leg stiffness (r = -0.80) and moment arm length (r = 0.90) showed large to extremely large correlations with running economy and with each other (r = -0.82). Correlations between running economy and kinetic measures (peak force, peak power, and time to peak force) for both genders were unclear. The relationship for stride rate (r = -0.27 to -0.31) was in the opposite direction to that for stride length (r = 0.32-0.49), and the relationship for contact time (r = -0.21 to -0.54) was opposite to that for flight time (r = 0.06-0.74). Although both leg stiffness and moment arm length are highly related to running economy, it seems that no single lower-body measure can completely explain differences in running economy between individuals or genders. Running economy is therefore likely determined by the sum of influences from multiple lower-body attributes.

  19. The Influence of Running on Foot Posture and In-Shoe Plantar Pressures.

    PubMed

    Bravo-Aguilar, María; Gijón-Noguerón, Gabriel; Luque-Suarez, Alejandro; Abian-Vicen, Javier

    2016-03-01

    Running can be considered a high-impact practice, and most people practicing continuous running experience lower-limb injuries. The aim of this study was to determine the influence of 45 min of running on foot posture and plantar pressures. The sample comprised 116 healthy adults (92 men and 24 women) with no foot-related injuries. The mean ± SD age of the participants was 28.31 ± 6.01 years; body mass index, 23.45 ± 1.96; and training time, 11.02 ± 4.22 h/wk. Outcome measures were collected before and after 45 min of running at an average speed of 12 km/h, and included the Foot Posture Index (FPI) and a baropodometric analysis. The results show that foot posture can be modified after 45 min of running. The mean ± SD FPI changed from 6.15 ± 2.61 to 4.86 ± 2.65 (P < .001). Significant decreases in mean plantar pressures in the external, internal, rearfoot, and forefoot edges were found after 45 min of running. Peak plantar pressures in the forefoot decreased after running. The pressure-time integral decreased during the heel strike phase in the internal edge of the foot. In addition, a decrease was found in the pressure-time integral during the heel-off phase in the internal and rearfoot edges. The findings suggest that after 45 min of running, a pronated foot tends to change into a more neutral position, and decreased plantar pressures were found after the run.

  20. Friction and Environmental Sensitivity of Molybdenum Disulfide: Effects of Microstructure

    NASA Astrophysics Data System (ADS)

    Curry, John F.

    For nearly a century, molybdenum disulfide has been employed as a solid lubricant to reduce the friction and wear between surfaces. MoS2 is in a class of unique materials, transition metal dichalcogenides (TMDCs), that have a single crystal structure forming lamellae that interact via weak van der Waals forces. This dissertation focuses on the link between the microstructure of MoS2 and the energetics of running film formation to reduce friction, and the effects of environmental sensitivities on performance. Nitrogen impinged MoS2 films are utilized as a comparator to amorphous PVD deposited MoS2 in many of the studies due to the highly ordered surface parallel basal texture of sprayed films. Comparisons showed that films with a highly ordered structure can reduce high friction behavior during run-in. It is thought that shear induced reorientation of amorphous films contributes to the typically high initial friction during run-in. In addition to a reduction in initial friction, highly ordered MoS2 films are shown to be more resistant to penetration from oxidative aging processes. High sensitivity, low-energy ion scattering (HS-LEIS) enabled depth profiles that showed oxidation limited to the first monolayer for ordered films and throughout the depth (4-5 nm) for amorphous films. X-ray photoelectron spectroscopy supported these findings, showing far more oxidation in amorphous films than ordered films. Many of these results show the benefits of a well run-in coating, yet transient increases in initial friction can still be noticed after dwell times of only 5-10 minutes. It was found that the transient return to high initial friction after dwell times past 5-10 minutes was not due to adsorbed species such as water, but possibly an effect of basal plane relaxation to a commensurate state. Additional techniques and methods were developed to study the effect of adsorbed water and load on running film formation via spiral orbit XRD studies.
Spiral orbit experiments enabled large enough worn areas for study in the XRD. Diffraction patterns for sputtered coatings at high loads (1 N) showed more intense signals for surface parallel basal plane representation than lower loads (100 mN). Tests run in dry and humid nitrogen (20% RH), however, showed no differences in reorientation of basal planes. Microstructure was found to be an important factor in determining the tribological performance of MoS2 films in a variety of testing conditions and environments. These findings will be useful in developing a mechanistic framework that better explains the energetics of running film formation and how different environments play a role.

  1. Running speed during training and percent body fat predict race time in recreational male marathoners.

    PubMed

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r^2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathoners.
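
    The reported regression can be applied directly; the function below simply encodes the published coefficients, and the example inputs are hypothetical.

```python
def predicted_marathon_time(body_fat_pct, training_speed_kmh):
    """Race time in minutes from the reported model (r^2 = 0.44):
    326.3 + 2.394 * (percent body fat, %) - 12.06 * (training speed, km/h)."""
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh

# e.g. a hypothetical runner with 15% body fat training at 11 km/h:
# 326.3 + 35.91 - 132.66 = 229.55 minutes (about 3 h 50 min)
```

    Note that the model explains only 44% of the variance in race times, so such an estimate carries a wide uncertainty for any individual runner.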

  2. ms2: A molecular simulation tool for thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran

    2011-11-01

    This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 has been shown for a large variety of fluids in preceding work.
    Program summary
    Program title: ms2
    Catalogue identifier: AEJF_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Special Licence supplied by the authors
    No. of lines in distributed program, including test data, etc.: 82 794
    No. of bytes in distributed program, including test data, etc.: 793 705
    Distribution format: tar.gz
    Programming language: Fortran90
    Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.)
    Operating system: Unix/Linux, Windows
    Has the code been vectorized or parallelized?: Yes. Message Passing Interface (MPI) protocol
    Scalability: Excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations.
    RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules.
    Classification: 7.7, 7.9, 12
    External routines: Message Passing Interface (MPI)
    Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures.
    Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism.
    Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less.
    Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories.
    Additional comments: Sample makefiles for multiple operation platforms are provided. Documentation is provided with the installation package and is available at http://www.ms-2.de.
    Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.

  3. Distribution, stock composition and timing, and tagging response of wild Chinook Salmon returning to a large, free-flowing river basin

    USGS Publications Warehouse

    Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.

    2014-01-01

    Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. 
However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.

  4. An atomistic simulation scheme for modeling crystal formation from solution.

    PubMed

    Kawska, Agnieszka; Brickmann, Jürgen; Kniep, Rüdiger; Hochrein, Oliver; Zahn, Dirk

    2006-01-14

    We present an atomistic simulation scheme for investigating crystal growth from solution. Molecular-dynamics simulation studies of such processes typically suffer from considerable limitations concerning both system size and simulation times. In our method this time-length scale problem is circumvented by an iterative scheme which combines a Monte Carlo-type approach for the identification of ion adsorption sites and, after each growth step, structural optimization of the ion cluster and the solvent by means of molecular-dynamics simulation runs. An important approximation of our method is based on assuming full structural relaxation of the aggregates between each of the growth steps. This concept only holds for compounds of low solubility. To illustrate our method we studied CaF2 aggregate growth from aqueous solution, which may be taken as a prototype for compounds of very low solubility. The limitations of our simulation scheme are illustrated by the example of NaCl aggregation from aqueous solution, which corresponds to a solute/solvent combination of very high salt solubility.
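
    The iterative loop the authors describe (a Monte Carlo choice of the next adsorption site, with the aggregate assumed fully relaxed between growth steps) can be caricatured on a 2D lattice. Everything below is schematic: coordination number stands in for the adsorption energetics, and the MD relaxation step is omitted because the lattice is treated as already relaxed.

```python
import random

def grow_cluster(n_ions, seed=0):
    """Schematic growth loop: repeatedly pick an empty site on the
    cluster surface, weighting each candidate by how many occupied
    neighbours it has (a crude proxy for a low-energy adsorption site),
    then add it. A real implementation would relax the cluster and
    solvent with MD after every addition."""
    rng = random.Random(seed)
    cluster = {(0, 0)}  # seed ion at the origin
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_ions:
        # candidate adsorption sites: empty neighbours of the cluster
        surface = {(x + dx, y + dy)
                   for (x, y) in cluster for dx, dy in nbrs} - cluster
        sites = sorted(surface)
        weights = [sum((sx + dx, sy + dy) in cluster for dx, dy in nbrs)
                   for sx, sy in sites]
        cluster.add(rng.choices(sites, weights=weights)[0])
    return cluster
```

    By construction every added ion touches the existing aggregate, so the cluster stays connected, mimicking growth of a compact, low-solubility aggregate rather than a dilute precipitate.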

  5. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars or pulsars has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches, covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
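
    At its simplest, a periodicity search of this kind reduces to locating the strongest Fourier component of a time series. The sketch below is a minimal illustration on synthetic data; real pulsar searches add dedispersion trials, harmonic summing, acceleration searches and interference excision.

```python
import numpy as np

def strongest_period(signal, dt):
    """Return the period (s) of the strongest spectral component.
    A bare-bones stand-in for the FFT-based searches run on real
    pulsar data; all candidate vetting is omitted."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = spectrum[1:].argmax() + 1  # skip the DC bin
    return 1.0 / freqs[k]

# synthetic 'pulsar': a 5 ms (200 Hz) tone buried in noise twice its amplitude
rng = np.random.default_rng(0)
dt = 1e-4                      # 10 kHz sampling, 1 s of data
t = np.arange(10000) * dt
x = 0.5 * np.sin(2 * np.pi * 200.0 * t) + rng.normal(0.0, 1.0, t.size)
```

    Even at this signal-to-noise ratio the coherent sum over 10,000 samples makes the 200 Hz bin stand far above the noise floor, which is why Fourier-domain searches recover pulsars invisible in the raw time series.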

  6. A Lagrangian Approach for Calculating Microsphere Deposition in a One-Dimensional Lung-Airway Model.

    PubMed

    Vaish, Mayank; Kleinstreuer, Clement

    2015-09-01

    Using the open-source software OpenFOAM as the solver, a novel approach to calculate microsphere transport and deposition in a 1D human lung-equivalent trumpet model (TM) is presented. Specifically, for particle deposition in a nonlinear trumpetlike configuration a new radial force has been developed which, along with the regular drag force, generates particle trajectories toward the wall. The new semi-empirical force is a function of any given inlet volumetric flow rate, micron-particle diameter, and lung volume. Particle-deposition fractions (DFs) in the size range from 2 μm to 10 μm are in agreement with experimental datasets for different laminar and turbulent inhalation flow rates as well as total volumes. Typical run times on a single-processor workstation to obtain actual total deposition results at comparable accuracy are 200 times less than those for an idealized whole-lung geometry (i.e., a 3D-1D model with airways up to the 23rd generation in a single path only).

  7. Automated acoustic localization and call association for vocalizing humpback whales on the Navy's Pacific Missile Range Facility.

    PubMed

    Helble, Tyler A; Ierley, Glenn R; D'Spain, Gerald L; Martin, Stephen W

    2015-01-01

    Time difference of arrival (TDOA) methods for acoustically localizing multiple marine mammals have been applied to recorded data from the Navy's Pacific Missile Range Facility in order to localize and track humpback whales. Modifications to established methods were necessary in order to simultaneously track multiple animals on the range faster than real-time and in a fully automated way, while minimizing the number of incorrect localizations. The resulting algorithms were run with no human intervention at computational speeds faster than the data recording speed on over forty days of acoustic recordings from the range, spanning multiple years. Spatial localizations based on correlating sequences of units originating from within the range produce estimates having a standard deviation typically 10 m or less (due primarily to TDOA measurement errors), and a bias of 20 m or less (due primarily to sound speed mismatch). An automated method for associating units to individual whales is presented, enabling automated humpback song analyses to be performed.
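
    As a rough illustration of the TDOA principle (not the paper's modified, faster-than-real-time algorithms), a handful of hydrophones and a nominal sound speed suffice to localize a source in 2D by least squares. The geometry, sound speed and grid below are all invented for the sketch.

```python
import numpy as np

C = 1500.0  # assumed nominal sound speed in seawater, m/s

def tdoas(src, sensors):
    """TDOAs of a source at `src`, relative to the first sensor."""
    d = np.linalg.norm(sensors - src, axis=1)
    return (d - d[0]) / C

def locate(measured, sensors, extent=2000.0, step=20.0):
    """Least-squares grid search for the position whose predicted TDOAs
    best match `measured`. Operational systems use closed-form or
    iterative hyperbolic estimators; brute force keeps the sketch short."""
    xs = np.arange(-extent, extent + step, step)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)                     # (M, 2)
    d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)  # (M, K)
    t = (d - d[:, :1]) / C
    return pts[((t - measured) ** 2).sum(axis=1).argmin()]
```

    With four sensors at the corners of a 1 km square and a source at (400, -300) m, the minimum-residual grid point recovers the true position to within the grid step; the ~10 m standard deviations reported in the abstract reflect the same trade-off between TDOA measurement error and sound-speed mismatch.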

  8. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, it can produce a critical situation in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.

  9. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.

  10. COST Action ES1206: Advanced GNSS Tropospheric Products for Monitoring Severe Weather Events and Climate (GNSS4SWEC)

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan; Guerova, Guergana; Dousa, Jan; Dick, Galina; de Haan, Siebren; Pottiaux, Eric; Bock, Olivier; Pacione, Rosa

    2017-04-01

    GNSS is a well established atmospheric observing technique which can accurately sense atmospheric water vapour, the most abundant greenhouse gas, accounting for up to 70% of atmospheric warming. Water vapour is typically under-sampled in modern operational meteorological observing systems, and obtaining and exploiting additional high-quality humidity observations is essential to improve weather forecasting and climate monitoring. COST Action ES1206 is a 4-year project, running from 2013 to 2017, which coordinates research activities and exploits the improved capabilities arising from concurrent developments in the GNSS, meteorological and climate communities. For the first time, the synergy of multi-GNSS constellations is used to develop new, more advanced tropospheric products, exploiting the full potential of multi-GNSS on a wide range of temporal and spatial scales - from real-time products monitoring and forecasting severe weather, to the highest quality post-processed products suitable for climate research. The Action also promotes the use of meteorological data as an input to real-time GNSS services and is stimulating the transfer of knowledge and data throughout Europe and beyond.

  11. Running with a minimalist shoe increases plantar pressure in the forefoot region of healthy female runners.

    PubMed

    Bergstra, S A; Kluitenberg, B; Dekker, R; Bredeweg, S W; Postema, K; Van den Heuvel, E R; Hijmans, J M; Sobhani, S

    2015-07-01

    Minimalist running shoes have been proposed as an alternative to barefoot running. However, several studies have reported cases of forefoot stress fractures after switching from standard to minimalist shoes. Therefore, the aim of the current study was to investigate the differences in plantar pressure in the forefoot region between running with a minimalist shoe and running with a standard shoe in healthy female runners during overground running. Randomized crossover design. In-shoe plantar pressure measurements were recorded from eighteen healthy female runners. Peak pressure, maximum mean pressure, pressure time integral and instant of peak pressure were assessed for seven foot areas. Force time integral, stride time, stance time, swing time, shoe comfort and landing type were assessed for both shoe types. A linear mixed model was used to analyze the data. Peak pressure and maximum mean pressure were higher in the medial forefoot (13.5% and 7.46%, respectively), central forefoot (37.5% and 29.2%, respectively) and lateral forefoot (37.9% and 20.4%, respectively) for the minimalist shoe condition. Stance time was reduced by 3.81%. No relevant differences in shoe comfort or landing strategy were found. Running with a minimalist shoe increased plantar pressure without a change in landing pattern. This increased pressure in the forefoot region might play a role in the occurrence of metatarsal stress fractures in runners who switched to minimalist shoes and warrants a cautious approach to transitioning to minimalist shoe use. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  12. Effect of a prior intermittent run at vVO2max on oxygen kinetics during an all-out severe run in humans.

    PubMed

    Billat, V L; Bocquet, V; Slawinski, J; Laffite, L; Demarle, A; Chassaing, P; Koralsztein, J P

    2000-09-01

    The purpose of this study was to examine the influence of prior intermittent running at VO2max on oxygen kinetics during a continuous severe intensity run and the time spent at VO2max. Eight long-distance runners performed three maximal tests on a synthetic track (400 m) whilst breathing through the COSMED K4 portable telemetric metabolic analyser: i) an incremental test which determined velocity at the lactate threshold (vLT), VO2max and velocity associated with VO2max (vVO2max), ii) a continuous severe intensity run at vLT plus 50% of the difference between vLT and vVO2max (vdelta50; 91.3+/-1.6% VO2max) preceded by a light continuous 20 minute run at 50% of vVO2max (light warm-up), iii) the same continuous severe intensity run at vdelta50 with a prior interval training exercise (hard warm-up) of repeated hard running bouts performed at 100% of vVO2max and light running at 50% of vVO2max (of 30 seconds each) performed until exhaustion (on average 19+/-5 min with 19+/-5 interval repetitions). This hard warm-up accelerated the VO2 kinetics: the time constant was reduced by 45% (28+/-7 sec vs 51+/-37 sec) and the slow component of VO2 (deltaVO2 6-3 min) was abolished (-143+/-271 ml x min(-1) vs 291+/-153 ml x min(-1)). In conclusion, despite a significantly lower total run time at vdelta50 (6 min 19 s +/- 0 min 17 s vs 8 min 20 s +/- 1 min 45 s, p=0.02) after the intermittent warm-up at VO2max, the time spent specifically at VO2max in the severe continuous run at vdelta50 was not significantly different.

  13. An automated metrics system to measure and improve the success of laboratory automation implementation.

    PubMed

    Benn, Neil; Turlais, Fabrice; Clark, Victoria; Jones, Mike; Clulow, Stephen

    2007-03-01

    The authors describe a system for collecting usage metrics from widely distributed automation systems. An application that records and stores usage data centrally, calculates run times, and charts the data was developed. Data were collected over 20 months from at least 28 workstations. The application was used to plot bar charts of date versus run time for individual workstations, the automation in a specific laboratory, or automation of a specified type. The authors show that revised user training, redeployment of equipment, and running complementary processes on one workstation can increase the average number of runs by up to 20-fold and run times by up to 450%. Active monitoring of usage leads to more effective use of automation. Usage data could be used to determine whether purchasing particular automation was a good investment.

  14. Stride-to-stride variability and complexity between novice and experienced runners during a prolonged run at anaerobic threshold speed.

    PubMed

    Mo, Shiwei; Chow, Daniel H K

    2018-05-19

    Motor control, related to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed for improving performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed and if there are differences between runners with different training experience. To compare characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER). Both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at his/her AT speed. Stride interval dynamics was obtained throughout the run with the middle 30 min equally divided into six time intervals (denoted as T1, T2, T3, T4, T5 and T6). Mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that mean stride interval significantly increased with running time in a non-linear trend (p < 0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and nonlinearly changed with running time for both groups with slight differences. These findings provided insights into how the motor control system adapts to progression of fatigue and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
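    The abstract does not name the estimator behind the scaling exponent alpha; detrended fluctuation analysis (DFA) is the standard choice for stride-interval series, so the sketch below assumes first-order DFA with illustrative scale choices (function names are ours, not the study's):

```python
import numpy as np

def stride_stats(intervals):
    """Mean and coefficient of variation (%) of a stride-interval series."""
    x = np.asarray(intervals, dtype=float)
    mean = x.mean()
    cv = 100.0 * x.std(ddof=1) / mean
    return mean, cv

def dfa_alpha(x, scales=None):
    """Scaling exponent alpha via first-order detrended fluctuation analysis.

    alpha ~ 0.5 for white noise, ~ 1.5 for a random walk; stride-interval
    series typically fall between.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())          # integrated (profile) series
    n = len(y)
    if scales is None:
        scales = np.unique(np.floor(
            np.logspace(np.log10(4), np.log10(n // 4), 12)).astype(int))
    t_full = np.arange(n)
    fluct = []
    for s in scales:
        nseg = n // s
        segs = y[:nseg * s].reshape(nseg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            # detrend each window with a least-squares line
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluct.append(np.mean(rms))
    # slope of log-log fluctuation curve is the DFA exponent
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]
```

    Computing these three quantities per time interval (T1..T6) reproduces the per-segment analysis structure described in the abstract.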

  15. Learned helplessness is independent of levels of brain-derived neurotrophic factor in the hippocampus

    PubMed Central

    Greenwood, Benjamin N.; Strong, Paul V.; Foley, Teresa E.; Thompson, Robert; Fleshner, Monika

    2007-01-01

    Reduced levels of brain-derived neurotrophic factor (BDNF) in the hippocampus have been implicated in human affective disorders and behavioral stress responses. The current studies examined the role of BDNF in the behavioral consequences of inescapable stress, or learned helplessness. Inescapable stress decreased BDNF mRNA and protein in the hippocampus of sedentary rats. Rats allowed voluntary access to running wheels for either 3 or 6 weeks prior to exposure to stress were protected against stress-induced reductions of hippocampal BDNF protein. The observed prevention of stress-induced decreases in BDNF, however, occurred in a time course inconsistent with the prevention of learned helplessness by wheel running, which is evident following 6 weeks, but not 3 weeks, of wheel running. BDNF suppression in physically active rats was produced by administering a single injection of the selective serotonin reuptake inhibitor fluoxetine (10 mg/kg) just prior to stress. Despite reduced levels of hippocampal BDNF mRNA following stress, physically active rats given the combination of fluoxetine and stress remained resistant against learned helplessness. Sedentary rats given both fluoxetine and stress still demonstrated typical learned helplessness behaviors. Fluoxetine by itself reduced BDNF mRNA in sedentary rats only, but did not affect freezing or escape learning 24 hours later. Finally, bilateral injections of BDNF (1 μg) into the dentate gyrus prior to stress prevented stress-induced reductions of hippocampal BDNF but did not prevent learned helplessness in sedentary rats. These data indicate that learned helplessness behaviors are independent of the presence or absence of hippocampal BDNF because blocking inescapable stress-induced BDNF suppression does not always prevent learned helplessness, and learned helplessness does not always occur in the presence of reduced BDNF.
Results also suggest that the prevention of stress-induced hippocampal BDNF suppression is not necessary for the protective effect of wheel running against learned helplessness. PMID:17161541

  16. Learned helplessness is independent of levels of brain-derived neurotrophic factor in the hippocampus.

    PubMed

    Greenwood, B N; Strong, P V; Foley, T E; Thompson, R S; Fleshner, M

    2007-02-23

    Reduced levels of brain-derived neurotrophic factor (BDNF) in the hippocampus have been implicated in human affective disorders and behavioral stress responses. The current studies examined the role of BDNF in the behavioral consequences of inescapable stress, or learned helplessness. Inescapable stress decreased BDNF mRNA and protein in the hippocampus of sedentary rats. Rats allowed voluntary access to running wheels for either 3 or 6 weeks prior to exposure to stress were protected against stress-induced reductions of hippocampal BDNF protein. The observed prevention of stress-induced decreases in BDNF, however, occurred in a time course inconsistent with the prevention of learned helplessness by wheel running, which is evident following 6 weeks, but not 3 weeks, of wheel running. BDNF suppression in physically active rats was produced by administering a single injection of the selective serotonin reuptake inhibitor fluoxetine (10 mg/kg) just prior to stress. Despite reduced levels of hippocampal BDNF mRNA following stress, physically active rats given the combination of fluoxetine and stress remained resistant against learned helplessness. Sedentary rats given both fluoxetine and stress still demonstrated typical learned helplessness behaviors. Fluoxetine by itself reduced BDNF mRNA in sedentary rats only, but did not affect freezing or escape learning 24 h later. Finally, bilateral injections of BDNF (1 μg) into the dentate gyrus prior to stress prevented stress-induced reductions of hippocampal BDNF but did not prevent learned helplessness in sedentary rats. These data indicate that learned helplessness behaviors are independent of the presence or absence of hippocampal BDNF because blocking inescapable stress-induced BDNF suppression does not always prevent learned helplessness, and learned helplessness does not always occur in the presence of reduced BDNF.
Results also suggest that the prevention of stress-induced hippocampal BDNF suppression is not necessary for the protective effect of wheel running against learned helplessness.

  17. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    PubMed

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. First, the continuous wavelet transforms of the uterine contraction signals were computed, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
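    The core of the running cross-correlation method is to slide a window along both signals and, at each position, find the lag that maximises the normalized cross-correlation; that lag is the local time delay between the two recording sites. A minimal sketch (window width, lag range and function names are illustrative; the paper's propagation% parameter is not reproduced):

```python
import numpy as np

def norm_xcorr(a, b):
    """Normalized (Pearson-style) cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def running_delay(x, y, win, max_lag, step=None):
    """For each window position, the lag (in samples) at which the
    normalized cross-correlation of x and y peaks.

    Positive lag means y lags behind x at that window.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    step = step or win // 2          # 50% window overlap by default
    delays = []
    for s in range(max_lag, len(x) - win - max_lag + 1, step):
        xs = x[s:s + win]
        r = [norm_xcorr(xs, y[s + lag:s + lag + win])
             for lag in range(-max_lag, max_lag + 1)]
        delays.append(int(np.argmax(r)) - max_lag)
    return delays
```

    For the wavelet variant described in the abstract, the same routine would be applied to each pair of wavelet-transformed series, one scale at a time.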

  18. Reliability of Vibrating Mesh Technology.

    PubMed

    Gowda, Ashwin A; Cuccia, Ann D; Smaldone, Gerald C

    2017-01-01

    For delivery of inhaled aerosols, vibrating mesh systems are more efficient than jet nebulizers and do not require added gas flow. We assessed the reliability of a vibrating mesh nebulizer (Aerogen Solo, Aerogen Ltd, Galway, Ireland) suitable for use in mechanical ventilation. An initial observational study was performed with 6 nebulizers to determine run time and efficiency using normal saline and distilled water. Nebulizers were run until cessation of aerosol production was noted, with residual volume and run time recorded. Three controllers were used to assess the impact of the controller on nebulizer function. Following the observational study, a more detailed experimental protocol was performed using 20 nebulizers. For this analysis, 2 controllers were used, and time to cessation of aerosol production was noted. Gravimetric techniques were used to measure residual volume. Total nebulization time and residual volume were recorded. Failure was defined as premature cessation of aerosol production represented by a residual volume of > 10% of the nebulizer charge. In the initial observational protocol, an unexpected sporadic failure rate of 25% was noted in 55 experimental runs. In the experimental protocol, a failure rate of 30% was noted in 40 experimental runs. Failed runs in the experimental protocol exhibited a wide range of retained volume, averaging (mean ± SD) 36 ± 21.3% compared with 3.2 ± 1.5% (P = .001) in successful runs. Small but significant differences existed in nebulization time between controllers. Aerogen Solo nebulization was often randomly interrupted, with a wide range of retained volumes. Copyright © 2017 by Daedalus Enterprises.

  19. Fixed-interval matching-to-sample: intermatching time and intermatching error runs

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  20. Use of Flexible Body Coupled Loads in Assessment of Day of Launch Flight Loads

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Yunis, Isam; Olds, Aaron D.

    2011-01-01

    A Day of Launch flight loads assessment technique that determines running loads calculated from flexible body coupled loads was developed for the Ares I-X Flight Test Vehicle. The technique was developed to quantify DOL flight loads in terms of structural load components rather than the typically used q-alpha metric to provide more insight into the DOL loads. In this technique, running loads in the primary structure are determined from the combination of quasi-static aerodynamic loads and dynamic loads. The aerodynamic loads are calculated as a function of time using trajectory parameters passed from the DOL trajectory simulation and are combined with precalculated dynamic loads using a load combination equation. The potential change in aerodynamic load due to wind variability during the countdown is included in the load combination. In the event of a load limit exceedance, the technique allows the identification of what load component is exceeded, a quantification of how much the load limit is exceeded, and where on the vehicle the exceedance occurs. This technique was used to clear the Ares I-X FTV for launch on October 28, 2009. This paper describes the use of coupled loads in the Ares I-X flight loads assessment and summarizes the Ares I-X load assessment results.

  1. Influence of the world's most challenging mountain ultra-marathon on energy cost and running mechanics.

    PubMed

    Vernillo, Gianluca; Savoldelli, Aldo; Zignoli, Andrea; Trabucchi, Pietro; Pellegrini, Barbara; Millet, Grégoire P; Schena, Federico

    2014-05-01

    To examine the effects of the world's most challenging mountain ultra-marathon (Tor des Géants(®) 2012) on the energy cost of three types of locomotion (cycling, level and uphill running) and running kinematics. Before (pre-) and immediately after (post-) the competition, a group of ten male experienced ultra-marathon runners performed in random order three submaximal 4-min exercise trials: cycling at a power of 1.5 W kg(-1) body mass; level running at 9 km h(-1) and uphill running at 6 km h(-1) at an inclination of +15 % on a motorized treadmill. Two video cameras recorded running mechanics at different sampling rates. Between pre- and post-, the uphill-running energy cost decreased by 13.8 % (P = 0.004); no change was noted in the energy cost of level running or cycling (NS). There was an increase in contact time (+10.3 %, P = 0.019) and duty factor (+8.1 %, P = 0.001) and a decrease in swing time (-6.4 %, P = 0.008) in the uphill-running condition. After this extreme mountain ultra-marathon, the subjects modified only their uphill-running patterns for a more economical step mechanics.

  2. Agreement between VO[subscript 2peak] Predicted from PACER and One-Mile Run Time-Equated Laps

    ERIC Educational Resources Information Center

    Saint-Maurice, Pedro F.; Anderson, Katelin; Bai, Yang; Welk, Gregory J.

    2016-01-01

    Purpose: This study examined the agreement between estimated peak oxygen consumption (VO[subscript 2peak]) obtained from the Progressive Aerobic Cardiovascular Endurance Run (PACER) fitness test and equated PACER laps derived from One-Mile Run time (MR). Methods: A sample of 680 participants (324 boys and 356 girls) in Grades 7 through 12…

  3. The Reliability of a 5km Run Test on a Motorized Treadmill

    ERIC Educational Resources Information Center

    Driller, Matthew; Brophy-Williams, Ned; Walker, Anthony

    2017-01-01

    The purpose of the present study was to determine the reliability of a 5km run test on a motorized treadmill. Over three consecutive weeks, 12 well-trained runners completed three 5km time trials on a treadmill following a standardized warm-up. Runners were partially-blinded to their running speed and distance covered. Total time to complete the…

  4. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...

  5. Critical Velocity Is Associated With Combat-Specific Performance Measures in a Special Forces Unit.

    PubMed

    Hoffman, Mattan W; Stout, Jeffrey R; Hoffman, Jay R; Landua, Geva; Fukuda, David H; Sharvit, Nurit; Moran, Daniel S; Carmon, Erez; Ostfeld, Ishay

    2016-02-01

    The purpose of this study was to examine the relationship between critical velocity (CV) and anaerobic distance capacity (ADC) to combat-specific tasks (CST) in a special forces (SFs) unit. Eighteen male soldiers (mean ± SD; age: 19.9 ± 0.8 years; height: 177.6 ± 6.6 cm; body mass: 74.1 ± 5.8 kg; body mass index [BMI]: 23.52 ± 1.63) from an SF unit of the Israel Defense Forces volunteered to complete a 3-minute all-out run along with CST (2.5-km run, 50-m casualty carry, and 30-m repeated sprints with "rush" shooting [RPTDS]). Estimates of CV and ADC from the 3-minute all-out run were determined from data downloaded from a global position system device worn by each soldier, with CV calculated as the average velocity of the final 30 seconds of the run and ADC as the velocity-time integral above CV. Critical velocity exhibited significant negative correlations with the 2.5-km run time (r = -0.62, p < 0.01) and RPTDS time (r = -0.71, p < 0.01). In addition, CV was positively correlated with the average velocity during the 2.5-km run (r = 0.64, p < 0.01). Stepwise regression identified CV as the most significant performance measure associated with the 2.5-km run time, whereas BMI and CV measures were significant predictors of RPTDS time (R(2) = 0.67, p ≤ 0.05). Using the 3-minute all-out run as a testing measurement in combat personnel may offer a more efficient and simpler way of assessing both aerobic and anaerobic capabilities (CV and ADC) within a relatively large sample.
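    The abstract defines CV as the average velocity over the final 30 seconds of the 3-minute all-out run and ADC as the velocity-time integral above CV. A minimal sketch of that computation (function and parameter names are ours; a simple rectangle-rule integral stands in for whatever integration the study used):

```python
def cv_and_adc(velocity, dt=1.0, tail_s=30.0):
    """Critical velocity (CV) and anaerobic distance capacity (ADC)
    from a 3-minute all-out run velocity trace sampled every dt seconds.

    CV  : mean velocity over the final tail_s seconds.
    ADC : velocity-time integral above CV (rectangle rule), i.e. the
          extra distance covered beyond what running at CV would give.
    """
    n_tail = int(round(tail_s / dt))
    cv = sum(velocity[-n_tail:]) / n_tail
    adc = sum(max(v - cv, 0.0) for v in velocity) * dt
    return cv, adc
```

    Fed with the 1 Hz velocity series from a GPS device, this yields both parameters from a single 3-minute field test, which is the practical appeal the abstract points to.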

  6. An upgraded version of the generator BCVEGPY2.0 for hadronic production of B meson and its excited states

    NASA Astrophysics Data System (ADS)

    Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang

    2006-11-01

    An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, in addition, a GNU C compiler, users may simulate the B-events in various experimental environments very conveniently. It has been organized with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version the GNU command make compiles the requested code with the help of a master makefile in the main code directory, and then builds an executable file with the default name run. Finally, this paper may also be considered an erratum: typographical errors in BCVEGPY2.0 and their corrections are listed. New version program (BCVEGPY2.1) summary Title of program: BCVEGPY2.1 Catalogue identifier: ADTJ_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to original program: BCVEGPY2.0 Reference in CPC: Comput. Phys. Commun. 174 (2006) 241 Does the new version supersede the old program: No Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of lines in distributed program, including test data, etc.: 31 521 No. of bytes in distributed program, including test data, etc.: 1 310 179 Distribution format: tar.gz Nature of physical problem: Hadronic production of the B meson itself and its excited states Method of solution: The code can optionally generate weighted and unweighted events. An interface to PYTHIA is provided to meet the needs of jet hadronization in the production.
Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach. Reasons for new version: Responding to feedback from users, we have rearranged the program in a convenient way so that it can easily be adapted by users to run simulations in their own experimental environment (e.g. detector acceptances and experimental cuts). Substantial effort has gone into rearranging the program into several modules with little cross communication among them; the main program is slimmed down, and all further actions are decoupled from the main program and can easily be called for various purposes. Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1, it takes about 20 hours on a 1.8 GHz Intel P4-processor machine to generate 1000 events; when IDWTUP = 3, generating 10^6 events takes only about 40 minutes. Production of the P-wave (cb¯)-quarkonium takes almost twice as long as that of the S-wave quarkonium. Summary of the changes (improvements): (1) The structure and organization of the program have changed considerably. The new version package BCVEGPY2.1 has been divided into several modules with little cross communication among them (some old version source files have been split into several parts for this purpose). The main program is slimmed down, and all further actions are decoupled from the main program so that they can easily be called for various applications. All of the Fortran code is organized in the main code directory, named bcvegpy2.1, which contains the main program, all of its prerequisite files and subsidiary 'folders' (subdirectories of the main code directory).
The method for setting the parameter is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120. [1

  7. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event. Otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is the HEP-Spec06 benchmark (HS06) suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, all WLCG experiments have moved to 64-bit applications, and they use different compilation flags from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script that is used by the DIRAC framework, used by LHCb, for evaluating at run time the performance of the worker nodes. This contribution reports on the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmark scales better but is less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond expectation from either benchmark, depending on external factors.

  8. Turtle: identifying frequent k-mers with cache-efficient algorithms.

    PubMed

    Roy, Rajat Shuvro; Bhattacharya, Debashish; Schliep, Alexander

    2014-07-15

    Counting the frequencies of k-mers in read libraries is often a first step in the analysis of high-throughput sequencing data. Infrequent k-mers are assumed to be a result of sequencing errors. The frequent k-mers constitute a reduced but error-free representation of the experiment, which can inform read error correction or serve as the input to de novo assembly methods. Ideally, the memory requirement for counting should be linear in the number of frequent k-mers and not in the, typically much larger, total number of k-mers in the read library. We present a novel method that balances time, space and accuracy requirements to efficiently extract frequent k-mers even for high-coverage libraries and large genomes such as human. Our method is designed to minimize cache misses by using a pattern-blocked Bloom filter to remove infrequent k-mers from consideration, in combination with a novel sort-and-compact scheme, instead of a hash, for the actual counting. Although this increases theoretical complexity, the savings in cache misses reduce the empirical running times. A variant of the method can resort to a counting Bloom filter for even larger memory savings, at the expense of false-negative rates in addition to the false-positive rates common to all Bloom filter-based approaches. A comparison with the state-of-the-art shows reduced memory requirements and running times. The tools are freely available for download at http://bioinformatics.rutgers.edu/Software/Turtle and http://figshare.com/articles/Turtle/791582. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
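    Turtle's pattern-blocked Bloom filter and sort-and-compact counting are not reproduced here; the sketch below shows only the underlying idea the abstract describes — a Bloom filter keeps singleton (likely erroneous) k-mers out of the exact counting structure, so memory scales with the frequent k-mers — with a plain Bloom filter and a Python dict standing in for the cache-aware data structures:

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter (Turtle's pattern-blocked, cache-aware variant
    is not reproduced here)."""
    def __init__(self, n_bits=1 << 20, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # derive n_hashes independent positions via salted BLAKE2b digests
        for i in range(self.n_hashes):
            h = hashlib.blake2b(item.encode(), digest_size=8,
                                salt=bytes([i] * 8)).digest()
            yield int.from_bytes(h, "big") % self.n_bits

    def add(self, item):
        """Set the item's bits; return True if all were already set
        (i.e. the item was probably seen before)."""
        seen = True
        for p in self._positions(item):
            byte, bit = divmod(p, 8)
            if not (self.bits[byte] >> bit) & 1:
                seen = False
                self.bits[byte] |= 1 << bit
        return seen

def count_frequent_kmers(reads, k, threshold=2):
    """Exact-count only k-mers the Bloom filter has seen before, so
    singletons (mostly sequencing errors) never enter the hash table."""
    bf, counts = BloomFilter(), {}
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if bf.add(kmer):                 # second or later sighting
                counts[kmer] = counts.get(kmer, 1) + 1
    return {km: c for km, c in counts.items() if c >= threshold}
```

    As in all Bloom-filter-based counters, a false positive can admit a singleton into the exact counter (inflating memory slightly), but counts of genuinely frequent k-mers remain exact.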

  9. Monitoring data transfer latency in CMS computing operations

    DOE PAGES

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...

    2015-12-23

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
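
    The stuck-transfer detection described here can be illustrated with a toy heuristic: flag a transfer that is almost complete but whose residual files have made no progress for a long time. The fields and thresholds below are illustrative assumptions, not the actual PhEDEx schema or metrics:

```python
from dataclasses import dataclass

@dataclass
class TransferState:
    """Snapshot of one dataset transfer (fields are illustrative,
    not the real PhEDEx data model)."""
    total_files: int
    done_files: int
    hours_since_last_completion: float

def is_stuck(state, min_done_fraction=0.95, stall_hours=24.0):
    """Heuristic 'stuck transfer' flag: the transfer is almost complete,
    but the remaining files have made no progress for a long time."""
    if state.total_files == 0:
        return False
    done_fraction = state.done_files / state.total_files
    return (done_fraction >= min_done_fraction
            and state.done_files < state.total_files
            and state.hours_since_last_completion >= stall_hours)
```

    A rule of this shape separates transfers that are merely slow (low completion fraction, steady progress) from the long tail of workflows held up by a handful of stuck files, which is the case the abstract says requires operator intervention.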

  10. Monitoring data transfer latency in CMS computing operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo

    During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.

  11. Spectroscopic analysis technique for arc-welding process control

    NASA Astrophysics Data System (ADS)

    Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel

    2005-09-01

    The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to monitoring and control of industrial processes. Particularly, it has been demonstrated that the analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), an early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding in terms of real-time implementations. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electronic temperature of the plasma through the analysis of the emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing an automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG welding, using fiber-optic capture of light and a low-cost CCD-based spectrometer, show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
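
    The paper's electronic-temperature estimate uses peaks from multiple atomic species; a commonly used simplified variant is the two-line intensity-ratio (Boltzmann) method, sketched below under the assumption of local thermodynamic equilibrium. This is a generic textbook formulation, not the paper's LPO-based pipeline, and any line constants used with it should come from spectroscopic tables:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def two_line_temperature(I1, I2, line1, line2):
    """Electron temperature (K) from the relative intensity of two emission
    lines of the same species and ionization stage (two-line Boltzmann
    method). Each line is (wavelength_nm, A_ul, g_u, E_upper_eV).
    Assumes LTE and optically thin lines.
    """
    lam1, A1, g1, E1 = line1
    lam2, A2, g2, E2 = line2
    # Boltzmann ratio: I1/I2 = (A1*g1*lam2)/(A2*g2*lam1) * exp(-(E1-E2)/(k*T))
    ratio = (I1 * A2 * g2 * lam1) / (I2 * A1 * g1 * lam2)
    return (E2 - E1) / (K_B_EV * math.log(ratio))
```

    In practice one uses many lines and a linear Boltzmann-plot fit, which is less sensitive to the intensity noise of any single peak; the two-line form above just makes the dependence on the upper-level energy gap explicit.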

  12. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time.
For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
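
    The underlying trade-off is easy to state even though NOWR's neighboring-optimal-control machinery is not shown here: flight time is the sum over route segments of distance divided by ground speed, and ground speed depends on the along-track and cross-track wind components. A minimal sketch under flat-earth, constant-true-airspeed assumptions (not the NOWR algorithm itself):

```python
import math

def segment_time(distance_km, tas_kmh, wind_kmh, wind_to_track_angle_deg):
    """Time (h) to fly one segment, given the angle between the wind
    vector and the desired track. Assumes the aircraft crabs into the
    crosswind so its ground track stays on the segment."""
    w_along = wind_kmh * math.cos(math.radians(wind_to_track_angle_deg))
    w_cross = wind_kmh * math.sin(math.radians(wind_to_track_angle_deg))
    # airspeed component left along-track after cancelling the crosswind
    v_along = math.sqrt(tas_kmh**2 - w_cross**2)
    return distance_km / (v_along + w_along)

def route_time(segments):
    """segments: iterable of (distance_km, tas_kmh, wind_kmh, angle_deg)."""
    return sum(segment_time(*s) for s in segments)
```

    Even a pure crosswind costs time (part of the airspeed goes into crabbing), which is why a wind-optimal route can beat the shorter great-circle route: it trades extra distance for more favorable along-track wind.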

  13. Changes in Contributions of Swimming, Cycling, and Running Performances on Overall Triathlon Performance Over a 26-Year Period.

    PubMed

    Figueiredo, Pedro; Marques, Elisa A; Lepers, Romuald

    2016-09-01

    Figueiredo, P, Marques, EA, and Lepers, R. Changes in contributions of swimming, cycling, and running performances on overall triathlon performance over a 26-year period. J Strength Cond Res 30(9): 2406-2415, 2016-This study examined the changes in the individual contribution of each discipline to the overall performance of Olympic and Ironman distance triathlons among men and women. Between 1989 and 2014, overall performances and their component disciplines (swimming, cycling and running) were analyzed from the top 50 overall male and female finishers. Regression analyses determined that for the Olympic distance, the split times in swimming and running decreased over the years (r = 0.25-0.43, p ≤ 0.05), whereas the cycling split and total time remained unchanged (p > 0.05), for both sexes. For the Ironman distance, the cycling and running splits and the total time decreased (r = 0.19-0.88, p ≤ 0.05), whereas swimming time remained stable, for both men and women. The average contribution of the swimming stage (∼18%) was smaller than that of the cycling and running stages (p ≤ 0.05), for both distances and both sexes. Running (∼47%) and then cycling (∼36%) had the greatest contributions to overall performance for the Olympic distance, whereas for the Ironman distance, cycling and running presented similar contributions (∼40%, p > 0.05). Across the years, in the Olympic distance, swimming contribution significantly decreased for women and men (r = 0.51 and 0.68, p < 0.001, respectively), whereas running increased for men (r = 0.33, p = 0.014). In the Ironman distance, swimming and cycling contributions changed in an undulating fashion, being inverse between the two segments, for both sexes (p < 0.01), whereas running contribution decreased for men only (r = 0.61, p = 0.001).
These findings highlight that strategies to improve running performance should be the main focus of preparation for competing in the Olympic distance, whereas in the Ironman distance both cycling and running are decisive and should be well developed.

  14. Reliability and validity of pressure and temporal parameters recorded using a pressure-sensitive insole during running.

    PubMed

    Mann, Robert; Malisoux, Laurent; Brunner, Roman; Gette, Paul; Urhausen, Axel; Statham, Andrew; Meijer, Kenneth; Theisen, Daniel

    2014-01-01

    Running biomechanics has received increasing interest in recent literature on running-related injuries, calling for new, portable methods for large-scale measurements. Our aims were to define running strike pattern based on output of a new pressure-sensitive measurement device, the Runalyser, and to test its validity regarding temporal parameters describing running gait. Furthermore, reliability of the Runalyser measurements was evaluated, as well as its ability to discriminate different running styles. Thirty-one healthy participants (30.3 ± 7.4 years, 1.78 ± 0.10 m and 74.1 ± 12.1 kg) were involved in the different study parts. Eleven participants were instructed to use a rearfoot (RFS), midfoot (MFS) and forefoot (FFS) strike pattern while running on a treadmill. Strike pattern was subsequently defined using a linear regression (R² = 0.89) between foot strike angle, as determined by motion analysis (1000 Hz), and strike index (SI, point of contact on the foot sole, as a percentage of foot-sole length), as measured by the Runalyser. MFS was defined by the 95% confidence interval of the intercept (SI = 43.9-49.1%). High agreement (overall mean difference 1.2%) was found between stance time, flight time, stride time and duty factor as determined by the Runalyser and a force-measuring treadmill (n=16 participants). Measurements of the two devices were highly correlated (R ≥ 0.80) and not significantly different. Test-retest intra-class correlation coefficients for all parameters were ≥ 0.94 (n=14 participants). Significant differences (p<0.05) between FFS, RFS and habitual running were detected regarding SI, stance time and stride time (n=24 participants). The Runalyser is suitable for, and easily applicable in large-scale studies on running biomechanics. Copyright © 2013 Elsevier B.V. All rights reserved.
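
    Given the midfoot-strike band reported above (SI = 43.9-49.1% of foot-sole length), classifying a step reduces to a threshold test. A sketch using exactly those boundaries (the RFS/MFS/FFS labels follow the abstract; how to treat values falling exactly on a boundary is an assumption):

```python
def strike_pattern(strike_index_pct, mfs_low=43.9, mfs_high=49.1):
    """Classify foot strike from the strike index: point of initial
    contact as a percentage of foot-sole length (heel = 0%).
    The band boundaries default to the 95% CI reported for midfoot
    strikes in the study."""
    if strike_index_pct < mfs_low:
        return "RFS"   # rearfoot strike
    if strike_index_pct <= mfs_high:
        return "MFS"   # midfoot strike
    return "FFS"       # forefoot strike
```
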

  15. Transitionless driving on adiabatic search algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for short running times to inverse-square decay for asymptotic running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse-square decay to an inverse fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  16. The Validity and Reliability of an iPhone App for Measuring Running Mechanics.

    PubMed

    Balsalobre-Fernández, Carlos; Agopyan, Hovannes; Morin, Jean-Benoit

    2017-07-01

    The purpose of this investigation was to analyze the validity of an iPhone application (Runmatic) for measuring running mechanics. To do this, 96 steps from 12 different runs at speeds ranging from 2.77 to 5.55 m·s⁻¹ were recorded simultaneously with Runmatic, as well as with an opto-electronic device installed on a motorized treadmill to measure the contact and aerial time of each step. Additionally, several running mechanics variables were calculated using the contact and aerial times measured, and previously validated equations. Several statistics were computed to test the validity and reliability of Runmatic in comparison with the opto-electronic device for the measurement of contact time, aerial time, vertical oscillation, leg stiffness, maximum relative force, and step frequency. The running mechanics values obtained with both the app and the opto-electronic device showed a high degree of correlation (r = .94-.99, p < .001). Moreover, there was very close agreement between instruments as revealed by the ICC(2,1) (ICC = 0.965-0.991). Finally, both Runmatic and the opto-electronic device showed almost identical reliability levels when measuring each set of 8 steps for every run recorded. In conclusion, Runmatic has been proven to be a highly reliable tool for measuring the running mechanics studied in this work.

  17. Examining the Impacts of High-Resolution Land Surface Initialization on Model Predictions of Convection in the Southeastern U.S.

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Santos, Pablo; Medlin, Jeffrey M.; Jedlovec, Gary J.

    2009-01-01

    One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within physics parameterizations, model resolution limitations, as well as uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture and temperature, ground fluxes, and vegetation are necessary to better simulate the interactions between the land surface and atmosphere, and ultimately improve predictions of local circulations and summertime pulse convection. The NASA Short-term Prediction Research and Transition (SPoRT) Center has been conducting studies to examine the impacts of high-resolution land surface initialization data generated by offline simulations of the NASA Land Information System (LIS) on subsequent numerical forecasts using the Weather Research and Forecasting (WRF) model (Case et al. 2008, to appear in the Journal of Hydrometeorology). Case et al. present improvements to simulated sea breezes and surface verification statistics over Florida by initializing WRF with land surface variables from an offline LIS spin-up run, conducted on the exact WRF domain and resolution. 
The current project extends the previous work over Florida, focusing on selected case studies of typical pulse convection over the southeastern U.S., with an emphasis on improving local short-term WRF simulations over the Mobile, AL and Miami, FL NWS county warning areas. Future efforts may involve examining the impacts of assimilating remotely-sensed soil moisture data, and/or introducing weekly greenness vegetation fraction composites (as opposed to monthly climatologies) into offline NASA LIS runs. Based on positive impacts, the offline LIS runs could be transitioned into an operational mode, providing land surface initialization data to NWS forecast offices in real time.

  18. Effects of surface removal on rolling-element fatigue

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.

    1987-01-01

    The Lundberg-Palmgren equation was modified to show the effect on rolling-element fatigue life of removing by grinding a portion of the stressed volume of the raceways of a rolling-element bearing. Results of this analysis show that depending on the amount of material removed, and depending on the initial running time of the bearing when material removal occurs, the 10-percent life of the reground bearings ranges from 74 to 100 percent of the 10-percent life of a brand new bearing. Three bearing types were selected for testing. A total of 250 bearings were reground. Of these, 30 bearings of each type were endurance tested to 1600 hr. No bearing failure occurred related to material removal. Two bearing failures occurred due to defective rolling elements and were typical of those which may occur in new bearings.

  19. Optical Design Using Small Dedicated Computers

    NASA Astrophysics Data System (ADS)

    Sinclair, Douglas C.

    1980-09-01

    Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.

  20. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
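
    The effect of a fill-reducing reordering such as minimum degree can be reproduced with SciPy's SuperLU interface (a supernodal, not multifrontal, solver, so this only illustrates the ordering argument): factor a 2-D Laplacian with the natural ordering and with a minimum-degree ordering, and compare the nonzeros in the factors.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def laplacian_2d(n):
    """5-point Laplacian on an n x n grid, a standard sparse test matrix."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

A = laplacian_2d(30)
b = np.ones(A.shape[0])

fill = {}
for order in ("NATURAL", "MMD_AT_PLUS_A"):
    lu = splu(A, permc_spec=order)     # sparse LU with the given ordering
    fill[order] = lu.L.nnz + lu.U.nnz  # fill-in: nonzeros in the factors
    x = lu.solve(b)
    assert np.linalg.norm(A @ x - b) < 1e-8

print(fill)
```

    Fewer nonzeros in the factors means fewer floating-point operations, which is the source of the run-time reduction the abstract reports (alongside the BLAS3 kernels).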

  1. Study of Rubber Composites with Positron Doppler Broadening Spectroscopy: Consideration of Counting Rate

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Quarles, C. A.

    2007-10-01

    We have used positron Doppler Broadening Spectroscopy (DBS) to investigate the uniformity of rubber-carbon black composite samples. The amount of carbon black added to a rubber sample is characterized by phr, the number of grams of carbon black per hundred grams of rubber. Typical concentrations in rubber tires are 50 phr. It has been shown that the S parameter measured by DBS depends on the phr of the sample, so the variation in carbon black concentration can be easily measured to 0.5 phr. In doing the experiments we observed a dependence of the S parameter on small variation in the counting rate or deadtime. By carefully calibrating this deadtime correction we can significantly reduce the experimental run time and thus make faster determination of the uniformity of extended samples.
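
    The deadtime correction mentioned above is, in its standard non-paralyzable form, a one-line formula: a measured rate m with deadtime τ corresponds to a true rate n = m/(1 − mτ). A sketch of the forward and inverse models (the paper's calibration of the S-parameter dependence on rate is not modeled here):

```python
def true_rate(measured_cps, dead_time_s):
    """Non-paralyzable dead-time correction: each recorded event blinds
    the system for tau seconds, so n = m / (1 - m*tau)."""
    loss = measured_cps * dead_time_s
    if loss >= 1.0:
        raise ValueError("measured rate saturates the non-paralyzable model")
    return measured_cps / (1.0 - loss)

def measured_rate(true_cps, dead_time_s):
    """Forward model: m = n / (1 + n*tau)."""
    return true_cps / (1.0 + true_cps * dead_time_s)
```

    The two functions are exact inverses, so a calibrated τ lets counting runs at different rates be put on a common footing, which is what allows the shorter run times reported in the abstract.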

  2. Investigation of hit-and-run crash occurrence and severity using real-time loop detector data and hierarchical Bayesian binary logit model with random effects.

    PubMed

    Xie, Meiquan; Cheng, Wen; Gill, Gurdiljot Singh; Zhou, Jiao; Jia, Xudong; Choi, Simon

    2018-02-17

    Most of the extensive research dedicated to identifying the influential factors of hit-and-run (HR) crashes has utilized typical maximum likelihood estimation binary logit models, and none have employed real-time traffic data. To fill this gap, this study focused on investigating factors contributing to HR crashes, as well as the severity levels of HR. This study analyzed 4-year crash and real-time loop detector data by employing hierarchical Bayesian models with random effects within a sequential logit structure. In addition to evaluation of the impact of random effects on model fitness and complexity, the prediction capability of the models was examined. Stepwise incremental sensitivity and specificity were calculated and receiver operating characteristic (ROC) curves were utilized to graphically illustrate the predictive performance of the model. Among the real-time flow variables, the average occupancy and speed from the upstream detector were observed to be positively correlated with HR crash possibility. The average upstream speed and speed difference between upstream and downstream speeds were correlated with the occurrence of severe HR crashes. In addition to real-time factors, other variables found influential for HR and severe HR crashes were length of segment, adverse weather conditions, dark lighting conditions with malfunctioning street lights, driving under the influence of alcohol, width of inner shoulder, and nighttime. This study identifies the traffic conditions under which HR and severe HR crashes tend to occur: relatively congested upstream traffic with high upstream speeds and significant speed deviations on long segments. The above findings suggest that traffic enforcement should be directed toward mitigating risky driving under the aforementioned traffic conditions. Moreover, enforcement agencies may employ alcohol checkpoints to counter driving under the influence (DUI) at night. 
With regard to engineering improvements, wider inner shoulders may be constructed to potentially reduce HR cases and street lights should be installed and maintained in working condition to make roads less prone to such crashes.
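
    The stepwise sensitivity/specificity and ROC evaluation mentioned in the abstract can be computed directly from predicted probabilities by sweeping the decision threshold. A self-contained sketch (toy scores, not the study's data):

```python
def roc_points(scores, labels):
    """Sweep the decision threshold over the predicted probabilities and
    return (false_positive_rate, true_positive_rate) pairs; the TPR is
    the sensitivity and 1 - FPR is the specificity at each threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    return [(0.0, 0.0)] + points

def auc(points):
    """Area under the ROC curve by the trapezoidal rule."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

    An AUC of 0.5 corresponds to a model with no discriminative power; the closer to 1.0, the better the ranking of crash versus non-crash cases.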

  3. Effect of 8 weeks of concurrent plyometric and running training on spatiotemporal and physiological variables of novice runners.

    PubMed

    Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christopher; García-López, Juan

    2018-03-01

    Concurrent plyometric and running training has the potential to improve running economy (RE) and performance through increasing muscle strength and power, but the possible effect on spatiotemporal parameters of running has not been studied yet. The aim of this study was to compare the effect of 8 weeks of concurrent plyometric and running training on spatiotemporal parameters and physiological variables of novice runners. Twenty-five male participants were randomly assigned into two training groups; running group (RG) (n = 11) and running + plyometric group (RPG) (n = 14). Both groups performed 8 weeks of running training programme, and only the RPG performed a concurrent plyometric training programme (two sessions per week). Anthropometric, physiological (VO2max, heart rate and RE) and spatiotemporal variables (contact and flight times, step rate and length) were registered before and after the intervention. In comparison to RG, the RPG reduced step rate and increased flight times at the same running speeds (P < .05) while contact times remained constant. Significant increases in pre- and post-training (P < .05) were found in RPG for squat jump and 5 bound test, while RG remained unchanged. Peak speed, ventilatory threshold (VT) speed and respiratory compensation threshold (RCT) speed increased (P < .05) for both groups, although peak speed and VO2max increased more in the RPG than in the RG. In conclusion, concurrent plyometric and running training entails a reduction in step rate, as well as increases in VT speed, RCT speed, peak speed and VO2max. Athletes could benefit from plyometric training in order to improve their strength, which would contribute to them attaining higher running speeds.

  4. Toward real-time performance benchmarks for Ada

    NASA Technical Reports Server (NTRS)

    Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy

    1986-01-01

    The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques are developed. Then a set of Ada language features believed to be important for real-time performance are presented and specific measurement methods discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.

  5. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    PubMed

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
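
    The shift and crossings rules studied here can be implemented compactly. The limits below follow one published formulation of these rules — a shift signal when some run of points on the same side of the median exceeds round(log2(n)) + 3, and a crossings signal when the number of median crossings falls below the 5th percentile of a Binomial(n−1, ½) — where n counts points not on the median; readers should check the paper itself for the exact limits used there:

```python
import math
from statistics import median

def run_chart_signals(data):
    """Shift and crossings tests for non-random variation in a run chart.

    Returns (shift_signal, crossings_signal). Points on the median are
    excluded, as is conventional for these rules."""
    med = median(data)
    sides = [1 if x > med else -1 for x in data if x != med]
    n = len(sides)

    # longest run of consecutive points on the same side of the median
    longest = run = 1
    for a, b in zip(sides, sides[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    shift_signal = longest > round(math.log2(n) + 3)

    # number of times the sequence crosses the median
    crossings = sum(a != b for a, b in zip(sides, sides[1:]))
    # smallest k with P(Binomial(n-1, 1/2) <= k) >= 0.05
    k, cum = 0, 0.0
    while True:
        cum += math.comb(n - 1, k) * 0.5 ** (n - 1)
        if cum >= 0.05:
            break
        k += 1
    crossings_signal = crossings < k
    return shift_signal, crossings_signal
```

    Both limits depend only on n, which is what keeps the false-signal rate roughly constant regardless of how many points the chart contains, as the abstract reports.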

  6. The Impact of a Food Elimination Diet on Collegiate Athletes' 300-meter Run Time and Concentration

    PubMed Central

    Breshears, Karen; Baker, David McA.

    2014-01-01

    Background: Optimal human function and performance through diet strategies are critical for everyone but especially for those involved in collegiate or professional athletics. Currently, individualized medicine (IM) is emerging as a more efficacious approach to health with emphasis on personalized diet strategies for the public and is common practice for elite athletes. One method for directing patient-specific foods in the diet, while concomitantly impacting physical performance, may be via IgG food sensitivity and Candida albicans analysis from dried blood spot (DBS) collections. Methods: The authors designed a quasi-experimental, nonrandomized, pilot study without a control group. Twenty-three participants, 15 female, 8 male, from soccer/volleyball and football athletic teams, respectively, mean age 19.64+0.86 years, were recruited for the study, which examined preposttest 300-meter run times and questionnaire responses after a 14-day IgG DBS–directed food elimination diet based on IgG reactivity to 93 foods. DBS specimen collection, 300-meter run times, and Learning Difficulties Assessment (LDA) questionnaires were collected at the participants' university athletics building on campus. IgG, C albicans, and S cerevisiae analyses were conducted at the Great Plains Laboratory, Lenexa, Kansas. Results: Data indicated a change in 300-meter run time but not of statistical significance (run time baseline mean=50.41 sec, run time intervention mean=50.14 sec). Descriptive statistics for frequency of responses and chi-square analysis revealed that 4 of the 23 items selected from the LDA (Listening-Memory and Concentration subscale R=.8669; Listening-Information Processing subscale R=.8517; and General Concentration and Memory subscale R=.9019) were improved posttest. 
Conclusion: The study results did not indicate merit in eliminating foods based on IgG reactivity for affecting athletic performance (faster 300-meter run time) but did reveal potential for affecting academic qualities of listening, information processing, concentration, and memory. Further studies are warranted evaluating IgG-directed food elimination diets for improving run time, concentration, and memory among college athletes as well as among other populations. PMID:25568830

  7. Changes in Running Mechanics During a 6-Hour Running Race.

    PubMed

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < .05), while tc increased after 4 h 30 min of running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < .05). Finally, SL decreased significantly (-5.1%, P = .010) during the last hour of the race. Most changes occurred after 4 h continuous self-paced running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
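
    The stiffness estimates above follow the spring-mass model of running: kvert = Fmax/Δz and kleg = Fmax/ΔL, with Fmax estimable from contact and aerial times via a sine-wave stance-force model (impulse balance over one stride). A sketch under those assumptions; whether this exact model was used in the study is not stated in the abstract:

```python
import math

def peak_force(mass_kg, t_contact_s, t_aerial_s, g=9.81):
    """Peak vertical GRF from the sine-wave model of stance: modelling
    F(t) = Fmax*sin(pi*t/tc), the contact impulse Fmax*2*tc/pi must equal
    the body-weight impulse m*g*(tc+ta) over one stride, giving
    Fmax = m*g*(pi/2)*(ta/tc + 1)."""
    return mass_kg * g * (math.pi / 2) * (t_aerial_s / t_contact_s + 1)

def vertical_stiffness(f_max_n, delta_z_m):
    """k_vert = Fmax / vertical COM displacement at mid-stance."""
    return f_max_n / delta_z_m

def leg_stiffness(f_max_n, delta_l_m):
    """k_leg = Fmax / peak leg-length compression."""
    return f_max_n / delta_l_m
```

    The model makes the abstract's findings mechanically consistent: a longer tc with a shorter ta lowers Fmax, and a lower Fmax over a similar Δz lowers kvert.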

  8. Differences in in vivo muscle fascicle and tendinous tissue behavior between the ankle plantarflexors during running.

    PubMed

    Lai, A K M; Lichtwark, G A; Schache, A G; Pandy, M G

    2018-03-30

The primary human ankle plantarflexors, soleus (SO), medial gastrocnemius (MG), and lateral gastrocnemius (LG), are typically regarded as synergists and play a critical role in running. However, due to differences in muscle-tendon architecture and joint articulation, the muscle fascicles and tendinous tissue of the plantarflexors may exhibit differences in their behavior and interactions during running. We combined in vivo dynamic ultrasound measurements with inverse dynamics analyses to identify and explain differences in muscle fascicle, muscle-tendon unit, and tendinous tissue behavior of the primary ankle plantarflexors across a range of steady-state running speeds. Consistent with their role as a force generator, the muscle fascicles of the uniarticular SO shortened less rapidly than the fascicles of the MG during early stance. Furthermore, the MG and LG exhibited delays in tendon recoil during the stance phase, reflecting their ability to transfer power and work between the knee and ankle via tendon stretch and storage of elastic strain energy. Our findings add to the growing body of evidence surrounding the distinct mechanistic functions of uni- and biarticular muscles during dynamic movements. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
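Run-up heights quoted at an exceedance rate are often restated as probabilities over a planning horizon. Assuming a stationary Poisson occurrence model (a standard hazard-analysis assumption, not stated in this abstract), the conversion is P = 1 − exp(−λT):

```python
import math

def exceedance_probability(rate_per_year, horizon_years):
    """Probability of at least one exceedance within a time horizon,
    assuming events follow a stationary Poisson process."""
    return 1.0 - math.exp(-rate_per_year * horizon_years)

# e.g. a run-up height with a 1/500-per-year exceedance rate, 50-year horizon
p = exceedance_probability(1.0 / 500.0, 50.0)  # roughly a 9.5% chance
```

Note the probability is noticeably less than the naive 50/500 = 10%, because the Poisson model accounts for the (small) chance of multiple events.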

  10. The immediate effect of long-distance running on T2 and T2* relaxation times of articular cartilage of the knee in young healthy adults at 3.0 T MR imaging

    PubMed Central

    Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc

    2016-01-01

Objective: To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. Methods: 30 healthy male adults (18–31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7–154.6 ms] and T2* weighted (24 TEs: 4.6–53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institutes of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product–moment correlation coefficients and confidence intervals were computed. Results: Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease of relaxation time was observed for T2 with 4.6 ms (±3.6 ms) and for T2* with 3.6 ms (±5.1 ms) after running. Conclusion: A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. Advances in knowledge: This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults. PMID:27336705

  11. The immediate effect of long-distance running on T2 and T2* relaxation times of articular cartilage of the knee in young healthy adults at 3.0 T MR imaging.

    PubMed

    Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc

    2016-08-01

To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7-154.6 ms] and T2* weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institutes of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease of relaxation time was observed for T2 with 4.6 ms (±3.6 ms) and for T2* with 3.6 ms (±5.1 ms) after running. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.
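The T2 and T2* maps behind these values come from fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2) across the echo times. A minimal log-linear least-squares sketch on synthetic, noise-free data (the study itself used ImageJ-based post-processing; the signal values here are invented for illustration):

```python
import math

def fit_t2(te_ms, signal):
    """Fit S(TE) = S0 * exp(-TE / T2) by ordinary least squares on
    log(S) = log(S0) - TE / T2; returns (S0, T2) in the units of te_ms."""
    n = len(te_ms)
    ys = [math.log(s) for s in signal]
    mx = sum(te_ms) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in te_ms)
    sxy = sum((x - mx) * (y - my) for x, y in zip(te_ms, ys))
    slope = sxy / sxx                      # slope = -1 / T2
    intercept = my - slope * mx            # intercept = log(S0)
    return math.exp(intercept), -1.0 / slope

# Synthetic check: 16 echo times spanning the range quoted in the abstract,
# noise-free decay with T2 = 35.7 ms (the study's mean initial T2)
te = [9.7 + i * (154.6 - 9.7) / 15.0 for i in range(16)]
signal = [1000.0 * math.exp(-t / 35.7) for t in te]
s0_hat, t2_hat = fit_t2(te, signal)
```

With real images the fit is run per voxel (or per region of interest) and the resulting T2 values are averaged over each cartilage segment.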

  12. Chronic sciatic neuropathy in rat reduces voluntary wheel running activity with concurrent chronic mechanical allodynia

    PubMed Central

    Whitehead, RA; Lam, NL; Sun, MS; Sanchez, JJ; Noor, S; Vanderwall, AG; Petersen, TR; Martin, HB

    2016-01-01

BACKGROUND Animal models of peripheral neuropathy produced by a number of manipulations are assessed for the presence of pathological pain states such as allodynia. While stimulus-induced behavioral assays are frequently used and important to examine allodynia (i.e., sensitivity to light mechanical touch; von Frey fiber test), other measures of behavior that reflect overall function are not only complementary to stimulus-induced response measures, but are also critical to gain a complete understanding of the effects of the pain model on quality of life, a clinically relevant aspect of the impact of pain on general function. Voluntary wheel running activity in rodent models of inflammatory and muscle pain is emerging as a reliable index of general function that extends beyond stimulus-induced behavioral assays. Clinically, reports of increased pain intensity occur at night, a period typically characterized by reduced activity during the diurnal cycle. We therefore examined in rats whether alterations in wheel running activity were more robust during the inactive phase compared to the active phase of their diurnal cycle in a widely used rodent model of chronic peripheral neuropathic pain, the sciatic nerve chronic constriction injury (CCI) model. METHODS In adult male Sprague Dawley rats, baseline (BL) hindpaw threshold responses to light mechanical touch were assessed using the von Frey test prior to measuring BL activity levels using freely accessible running wheels (1 hr/day for 7 sequential days) to quantify distance traveled. Running wheel activity BL values are expressed as total distance traveled (m). The overall experimental design was: following BL measures, rats underwent either sham or CCI surgery followed by repeated behavioral re-assessment of hindpaw thresholds and wheel running activity levels for up to 18 days after surgery.
Specifically, separate groups of rats were assessed for wheel running activity levels (1 hr total/trial) during the onset (within first 2 hrs) of either the (1) inactive (n=8/gp) or (2) active (n = 8/gp) phase of the diurnal cycle. An additional group of CCI-treated rats (n = 8/gp) was exposed to a locked running wheel to control for the potential effects of wheel running exercise on allodynia. The 1-hr running wheel trial period was further examined at discrete 20-min intervals to identify possible pattern differences in activity during the first, middle and last portion of the 1-hr trial. The effect of neuropathy on activity levels was assessed by measuring the change from each group's respective BL to distance traveled in the running wheels. RESULTS While wheel running distances between groups were not different at BL from rats examined during either the inactive phase of the diurnal cycle or active phase of the diurnal cycle, sciatic nerve CCI reduced running wheel activity levels compared to sham-operated controls during the inactive phase. Additionally, compared to sham controls, bilateral low threshold mechanical allodynia was observed at all time-points after surgical induction of neuropathy in rats with free-wheel and locked-wheel access. Allodynia in CCI compared to shams was replicated in rats whose running wheel activity was examined during the active phase of the diurnal cycle. Conversely, no significant reduction in wheel running activity was observed in CCI-treated rats compared to sham controls at any time-point when activity levels were examined during the active diurnal phase. Lastly, running wheel activity patterns within the 1 hr trial period during the inactive phase of the diurnal cycle were relatively consistent throughout each 20 min phase. CONCLUSIONS Compared to non-neuropathic sham controls, a profound and stable reduction of running wheel activity was observed in CCI rats during the inactive phase of the diurnal cycle.
A concurrent robust allodynia persisted in all rats regardless of when wheel running activity was examined or whether they ran on wheels, suggesting that acute wheel running activity does not alter chronic low intensity mechanical allodynia as measured using the von Frey fiber test. Overall, these data support that acute wheel running exercise with limited repeated exposures does not itself alter allodynia and offers a behavioral assay complementary to stimulus-induced measures of neuropathic pain. PMID:27782944

  13. Running speed during training and percent body fat predict race time in recreational male marathoners

    PubMed Central

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

Background Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results After multivariate regression, running speed of the training units (β = −0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) − 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion The present results suggest that low body fat and running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathoners. PMID:24198587
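The regression equation above can be applied directly. A small sketch with a hypothetical runner (the 15% body fat and 11 km/h training speed are invented for illustration, and the estimate carries the equation's r² = 0.44 uncertainty):

```python
def predicted_marathon_time(body_fat_pct, training_speed_kmh):
    """Race-time estimate in minutes from the study's regression (r^2 = 0.44):
    326.3 + 2.394 * body fat (%) - 12.06 * training speed (km/h)."""
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh

# Hypothetical runner: 15% body fat, training at 11 km/h
t = predicted_marathon_time(15.0, 11.0)  # ~229.6 min, i.e. about 3 h 50 min
```

The signs match the study's interpretation: each added percent of body fat slows the predicted time, each extra km/h of training speed speeds it up.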

  14. Passive (Micro-) Seismic Event Detection by Identifying Embedded "Event" Anomalies Within Statistically Describable Background Noise

    NASA Astrophysics Data System (ADS)

    Baziw, Erick; Verbeek, Gerald

    2012-12-01

Among engineers there is considerable interest in the real-time identification of "events" within time series data with a low signal-to-noise ratio. This is especially true for acoustic emission analysis, which is utilized to assess the integrity and safety of many structures and is also applied in the field of passive seismic monitoring (PSM). Here an array of seismic receivers is used to acquire acoustic signals to monitor locations where seismic activity is expected: underground excavations, deep open pits and quarries, reservoirs into which fluids are injected or from which fluids are produced, permeable subsurface formations, or sites of large underground explosions. The most important element of PSM is event detection: the monitoring of seismic acoustic emissions is a continuous, real-time process which typically runs 24 h a day, 7 days a week, and therefore a PSM system with poor event detection can easily acquire terabytes of useless data as it does not identify crucial acoustic events. This paper outlines a new algorithm developed for this application, the so-called SEED™ (Signal Enhancement and Event Detection) algorithm. The SEED™ algorithm uses real-time Bayesian recursive estimation digital filtering techniques for PSM signal enhancement and event detection.
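The SEED™ algorithm itself is Bayesian and not publicly specified in this abstract; as a generic illustration of the event-detection problem it addresses, here is a classic short-term/long-term average (STA/LTA) trigger, which is explicitly not the SEED method but the traditional baseline it competes with (all signal values are synthetic):

```python
import random

def sta_lta(samples, n_sta, n_lta, threshold):
    """Return indices where the short-term / long-term average ratio of the
    rectified signal crosses the threshold (classic STA/LTA trigger)."""
    x = [abs(s) for s in samples]
    triggers, armed = [], True
    for i in range(n_lta, len(x)):
        sta = sum(x[i - n_sta:i]) / n_sta
        lta = sum(x[i - n_lta:i]) / n_lta
        ratio = sta / lta if lta > 0 else 0.0
        if armed and ratio > threshold:
            triggers.append(i)
            armed = False          # ignore repeats until the ratio falls back
        elif ratio < 1.0:
            armed = True
    return triggers

# Synthetic trace: Gaussian noise with a burst ("event") starting at sample 500
rng = random.Random(42)
trace = [rng.gauss(0.0, 1.0) for _ in range(1000)]
for i in range(500, 520):
    trace[i] += 20.0
hits = sta_lta(trace, n_sta=10, n_lta=100, threshold=4.0)
```

A Bayesian recursive filter such as SEED aims to outperform this kind of amplitude-ratio trigger precisely in the low signal-to-noise regime, where STA/LTA either misses events or floods the system with false triggers.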

  15. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to run on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) on the GPU so as to massively reduce memory transfers to and from it. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Firestone, Ryan; Marnay, Chris

The on-site generation of electricity can offer building owners and occupiers financial benefits as well as social benefits such as reduced grid congestion, improved energy efficiency, and reduced greenhouse gas emissions. Combined heat and power (CHP), or cogeneration, systems make use of the waste heat from the generator for site heating needs. Real-time optimal dispatch of CHP systems is difficult to determine because of complicated electricity tariffs and uncertainty in CHP equipment availability, energy prices, and system loads. Typically, CHP systems use simple heuristic control strategies. This paper describes a method of determining optimal control in real-time and applies it to a light industrial site in San Diego, California, to examine: 1) the added benefit of optimal over heuristic controls, 2) the price elasticity of the system, and 3) the site-attributable greenhouse gas emissions, all under three different tariff structures. Results suggest that heuristic controls are adequate under the current tariff structure and relatively high electricity prices, capturing 97 percent of the value of the distributed generation system. Even more value could be captured by simply not running the CHP system during times of unusually high natural gas prices. Under hypothetical real-time pricing of electricity, heuristic controls would capture only 70 percent of the value of distributed generation.

  17. Real-Time Interactive Tree Animation.

    PubMed

    Quigley, Ed; Yu, Yue; Huang, Jingwei; Lin, Winnie; Fedkiw, Ronald

    2018-05-01

We present a novel method for posing and animating botanical tree models interactively in real time. Unlike other state-of-the-art methods, which tend to produce trees that are overly flexible, bending and deforming as if they were underwater plants, our approach allows for arbitrarily high stiffness while still maintaining real-time frame rates without spurious artifacts, even on quite large trees with over ten thousand branches. This is accomplished by using an articulated rigid body model with as-stiff-as-desired rotational springs in conjunction with our newly proposed simulation technique, which is motivated both by position based dynamics and the typical algorithms for articulated rigid bodies. The efficiency of our algorithm allows us to pose and animate trees with millions of branches or alternatively simulate a small forest comprised of many highly detailed trees. Even using only a single CPU core, we can simulate ten thousand branches in real time while still maintaining quite crisp user interactivity. This has allowed us to incorporate our framework into a commodity game engine to run interactively even on a low-budget tablet. We show that our method is amenable to the incorporation of a large variety of desirable effects such as wind, leaves, fictitious forces, collisions, fracture, etc.

  18. Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.

    PubMed

    Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C

    2017-03-01

Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017-To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at Δ70% intensity and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five-thousand-meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000-D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five-thousand-meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study show the importance of aerobic and anaerobic energy system contributions in predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
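The two-parameter predictions quoted in the abstract, t = (5,000 − D′)/CS and s = D′/t + CS, can be computed directly. A sketch with hypothetical parameter values (the CS of 4.0 m/s and D′ of 200 m are invented, not from the study):

```python
def predict_5000m(cs, d_prime, distance=5000.0):
    """Predict race time and mean speed from the two-parameter
    critical-speed model: t = (distance - D') / CS, s = D'/t + CS."""
    t = (distance - d_prime) / cs
    s = d_prime / t + cs
    return t, s

# Hypothetical athlete: CS = 4.0 m/s, D' = 200 m
t, s = predict_5000m(4.0, 200.0)  # t = 1200 s; s * t recovers the 5000 m
```

By construction the two predictions are consistent (s·t = distance), so the model's error shows up as a single bias in time or speed, which is what the study's 5.7-9.4% underprediction reflects.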

  19. The New Room Arrangement as a Teaching Strategy = La Nueva Organizacion del Salon como Estrategia Educativa.

    ERIC Educational Resources Information Center

    Dodge, Diane Trister

    Many typical classroom behavior problems--running in the classroom, inability to make choices, failure to stick with activities, fighting over toys, and poor use of materials-- can be traced to how the room is arranged and how materials are displayed. By making a few changes in the classroom environment, early childhood teachers can create a…

  20. Effects of Diet High in Palmitoleic Acid on Serum Lipid Levels and Metabolism

    DTIC Science & Technology

    2000-07-01

cholesterol, high-density lipoprotein cholesterol, and triglyceride … a typical American diet. … high-density lipoprotein (HDL) cholesterol, and triglyceride levels … group imbalance resulting from dropouts or exclusions during the run-in or early in the … Circulation 1997;95:69-75. 15. Austin MA, Rodriguez BL, McKnight B, Curb JD. Low-density lipoprotein (LDL) particle size and plasma triglyceride (TG

  1. Mission Impossible? Reflecting upon the Relationship between Physical Education, Youth Sport and Lifelong Participation

    ERIC Educational Resources Information Center

    Green, Ken

    2014-01-01

    It is widely believed that school physical education (PE) is or, at the very least, can (even should) be a crucial vehicle for enhancing young people's engagement with physically active recreation (typically but not exclusively in the form of sport) in their leisure and, in the longer run, over the life-course. Despite the prevalence of such…

2. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.

  3. Sensitivity to Morphosyntactic Information in 3-Year-Old Children with Typical Language Development: A Feasibility Study

    ERIC Educational Resources Information Center

    Deevy, Patricia; Leonard, Laurence B.; Marchman, Virginia A.

    2017-01-01

    Purpose: This study tested the feasibility of a method designed to assess children's sensitivity to tense/agreement information in fronted auxiliaries during online comprehension of questions (e.g., "Are the nice little dogs running?"). We expected that a group of children who were proficient in auxiliary use would show this sensitivity,…

  4. Actual situation analyses of rat-run traffic on community streets based on car probe data

    NASA Astrophysics Data System (ADS)

    Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao

    2017-10-01

Lowering so-called "rat-run" traffic on community streets has been one of the significant challenges in improving the living environment of neighborhoods. However, it has been difficult to quantitatively grasp the actual situation of rat-run traffic through traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic based on car probe data. In addition, based on the extracted rat-run traffic in Toyohashi city, Japan, we analyze the actual situation, such as the time and location distribution of the rat-run traffic. As a result, in Toyohashi city, the rate of rat-run route use increases in the peak time period. Focusing on the location distribution of rat-run traffic, in addition, it passes through a variety of community streets; there is no great inter-district bias in the routes frequently used for rat-run traffic. Next, we focused on trips passing through a heavily used rat-run route. We found that these drivers may habitually use the route as a rat-run, because their trips had some commonalities. We also found that they tend to use the rat-run route because it is shorter than the alternative highway route, and that their travel speeds were faster than on the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and phenomenal tendencies of rat-run traffic.
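The abstract does not detail the extraction criteria, so here is one minimal, assumed formulation for illustration: within a trip's ordered link sequence, flag runs of community-street links that are entered from and exited to arterial links (the link classes and the toy trip below are invented):

```python
def rat_run_segments(link_classes):
    """Given a trip as a sequence of link classes ('arterial' or 'community'),
    return (start, end) index pairs of community-street runs that are entered
    from and exited to an arterial, i.e. candidate rat-run segments."""
    segments, start = [], None
    for i, cls in enumerate(link_classes):
        if cls == 'community' and start is None:
            start = i
        elif cls == 'arterial' and start is not None:
            if start > 0:               # must be entered from an arterial
                segments.append((start, i - 1))
            start = None
    return segments

# Toy trip: two community detours sandwiched between arterial links
trip = ['arterial', 'arterial', 'community', 'community', 'arterial',
        'community', 'arterial']
segs = rat_run_segments(trip)           # [(2, 3), (5, 5)]
```

A real pipeline would additionally compare each flagged segment's length and travel time against the parallel arterial route, as the study does when characterizing why drivers choose rat-run routes.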

  5. Voluntary wheel running in dystrophin-deficient (mdx) mice: Relationships between exercise parameters and exacerbation of the dystrophic phenotype.

    PubMed

    Smythe, Gayle M; White, Jason D

    2011-12-18

Voluntary wheel running can potentially be used to exacerbate the disease phenotype in dystrophin-deficient mdx mice. While it has been established that voluntary wheel running is highly variable between individuals, the key parameters of wheel running that impact the most on muscle pathology have not been examined in detail. We conducted a 2-week test of voluntary wheel running by mdx mice and examined the impact of wheel running on disease pathology. There was significant individual variation in the average daily distance (ranging from 0.003 ± 0.005 km to 4.48 ± 0.96 km), culminating in a wide range (0.040 km to 67.24 km) of total cumulative distances run by individuals. There was also variation in the number and length of run/rest cycles per night, and the average running rate. Correlation analyses demonstrated that in the quadriceps muscle, a low number of high-distance run/rest cycles was the most consistent indicator of increased tissue damage. The amount of rest time between running bouts was a key factor associated with gastrocnemius damage. These data emphasize the need for detailed analysis of individual running performance, consideration of the length of wheel exposure time, and the selection of appropriate muscle groups for analysis, when applying the use of voluntary wheel running to disease exacerbation and/or pre-clinical testing of the efficacy of therapeutic agents in the mdx mouse.

  6. Changes in the Composition of the Fram Strait Freshwater Outflow

    NASA Astrophysics Data System (ADS)

    Dodd, Paul; Granskog, Mats; Fransson, Agneta; Chierici, Melissa; Stedmon, Colin

    2016-04-01

Fram Strait is the largest gateway and only deep connection between the Arctic Ocean and the subpolar oceans. Monitoring the exchanges through Fram Strait allows us to detect and understand current changes occurring in the Arctic Ocean and to predict the effects of those changes on the Arctic and Subarctic climate and ecosystems. Polar water, recirculating Atlantic Water and deeper water masses exported from the Arctic Ocean through western Fram Strait are monitored year-round by an array of moored instruments along 78°50'N, continuously maintained by the Norwegian Polar Institute since the 1990s. Complementary annual hydrographic sections have been repeated along the same latitude every September. This presentation will focus on biogeochemical tracer measurements collected along repeated sections from 1997-2015, which can be used to identify freshwater from different sources and reveal the causes of variations in the total volume of freshwater exported, e.g., pulses of freshwater from the Pacific. Repeated tracer sections across Fram Strait reveal significant changes in the composition of the outflow in recent years, with recent sections showing positive fractions of sea ice meltwater at the surface near the core of the EGC, suggesting that more sea ice melts back into the surface than previously. The 1997-2015 time series of measurements reveals a strong anti-correlation between run-off and net sea ice meltwater inventories, suggesting that run-off and brine may be delivered to Fram Strait together from a common source. While the freshwater outflow at Fram Strait typically exhibits a similar run-off to net sea ice meltwater ratio to the central Arctic Ocean and Siberian shelves, we find that the ratio of run-off to sea ice meltwater at Fram Strait is decreasing with time, suggesting an increased surface input of sea ice meltwater in recent years.
In 2014 and 2015 measurements of salinity, δ18O and total alkalinity were collected from sea ice cores as well as the underlying water column in Fram Strait. We use this dataset to investigate the feasibility of using concurrent δ18O and total alkalinity measurements to separately identify precipitation, which probably makes up a significant fraction of the freshwater in Fram Strait, but has so far not been separately monitored.

  7. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance up to 8 nodes. We have also tested the scalability on four different networks: Infiniband, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512, and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.

8. N-loop running should be combined with N-loop matching

    NASA Astrophysics Data System (ADS)

    Braathen, Johannes; Goodsell, Mark D.; Krauss, Manuel E.; Opferkuch, Toby; Staub, Florian

    2018-01-01

    We investigate the high-scale behavior of Higgs sectors beyond the Standard Model, pointing out that the proper matching of the quartic couplings before applying the renormalization group equations (RGEs) is of crucial importance for reliable predictions at larger energy scales. In particular, the common practice of using leading-order parameters in the RGE evolution is insufficient to make precise statements on a given model's UV behavior, typically resulting in uncertainties of many orders of magnitude. We argue that, before applying N-loop RGEs, matching should be performed at N-loop order, in contrast to common lore. We show both analytical and numerical results in which the impact is sizable for three minimal extensions of the Standard Model: a singlet extension, a second Higgs doublet, and vector-like quarks. We highlight that the known two-loop RGEs tend to moderate the running of their one-loop counterparts, typically delaying the appearance of Landau poles. For the addition of vector-like quarks we show that the complete two-loop matching and RGE evolution hint at a stabilization of the electroweak vacuum at high energies, in contrast to results in the literature.

  9. Sex difference in top performers from Ironman to double deca iron ultra-triathlon

    PubMed Central

    Knechtle, Beat; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A

    2014-01-01

    This study investigated changes in performance and sex difference in top performers for ultra-triathlon races held between 1978 and 2013 from Ironman (3.8 km swim, 180 km cycle, and 42 km run) to double deca iron ultra-triathlon distance (76 km swim, 3,600 km cycle, and 844 km run). The fastest men ever were faster than the fastest women ever for split and overall race times, with the exception of the swimming split in the quintuple iron ultra-triathlon (19 km swim, 900 km cycle, and 210.1 km run). Correlation analyses showed an increase in sex difference with increasing length of race distance for swimming (r2=0.67, P=0.023), running (r2=0.77, P=0.009), and overall race time (r2=0.77, P=0.0087), but not for cycling (r2=0.26, P=0.23). For the annual top performers, split and overall race times decreased across years nonlinearly in female and male Ironman triathletes. For longer distances, cycling split times decreased linearly in male triple iron ultra-triathletes, and running split times decreased linearly in male double iron ultra-triathletes but increased linearly in female triple and quintuple iron ultra-triathletes. Overall race times increased nonlinearly in female triple and male quintuple iron ultra-triathletes. The sex difference decreased nonlinearly in swimming, running, and overall race time in Ironman triathletes but increased linearly in cycling and running and nonlinearly in overall race time in triple iron ultra-triathletes. These findings suggest that women reduced the sex difference nonlinearly in shorter ultra-triathlon distances (ie, Ironman), but for longer distances than the Ironman, the sex difference increased or remained unchanged across years. It seems very unlikely that female top performers will ever outrun male top performers in ultratriathlons. The nonlinear change in speed and sex difference in Ironman triathlon suggests that female and male Ironman triathletes have reached their limits in performance. PMID:25114605

  10. IGT-Open: An open-source, computerized version of the Iowa Gambling Task.

    PubMed

    Dancy, Christopher L; Ritter, Frank E

    2017-06-01

    The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture.
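
For context, the canonical four-deck payoff structure of the IGT (following the original Bechara et al. design) can be sketched as below. This illustrates the task's contingencies only; it is an assumption that IGT-Open ships this exact schedule, and its configurable payoffs may differ:

```python
# Canonical IGT payoff structure (illustrative, not necessarily
# IGT-Open's shipped configuration): decks A and B pay $100 per card
# but lose $1250 per 10-card block; decks C and D pay $50 per card
# and lose only $250 per block.

DECKS = {
    "A": {"reward": 100, "loss_per_block": 1250},  # disadvantageous
    "B": {"reward": 100, "loss_per_block": 1250},  # disadvantageous
    "C": {"reward": 50,  "loss_per_block": 250},   # advantageous
    "D": {"reward": 50,  "loss_per_block": 250},   # advantageous
}

def net_per_block(deck: str) -> int:
    """Expected net gain from one deck over a 10-card block."""
    d = DECKS[deck]
    return 10 * d["reward"] - d["loss_per_block"]
```

The asymmetry this encodes (immediately attractive decks that lose money in the long run) is what makes the task a probe of decision-making, and it is the structure that computational process models such as the ACT-R models mentioned above must reproduce.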

  11. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is a spectral decomposition in cylindrical geometry. This decomposition combines the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (the openPMD format: openpmd.org) and a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.

  12. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
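
The idea behind the third interval type can be illustrated with a minimal random-walk Metropolis sampler. This is only a sketch of the general MCMC technique on a stand-in one-dimensional posterior; UCODE_2014 itself uses the far more sophisticated multi-chain DREAM algorithm:

```python
import math
import random

# Minimal random-walk Metropolis sampler, sketched to show how an MCMC
# Bayesian credible interval is obtained from posterior samples. The
# stand-in "posterior" is a standard normal log-density (up to a constant).

def log_post(theta):
    return -0.5 * theta * theta

def metropolis(n_samples, step=1.0, seed=1):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)          # propose a move
        if math.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                             # accept; else keep theta
        samples.append(theta)
    return samples

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from the empirical percentiles."""
    s = sorted(samples)
    lo = s[int((1 - level) / 2 * len(s))]
    hi = s[int((1 + level) / 2 * len(s)) - 1]
    return lo, hi
```

In practice one discards an initial burn-in portion of the chain before computing the interval; for the standard normal stand-in posterior the 95% interval should land near (-1.96, 1.96). The "10,000s–100,000s of model runs" quoted above reflects that every proposal requires one forward-model evaluation.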

  13. Performance analysis of local area networks

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.; Hall, Mary Grace

    1990-01-01

    A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented in Simula, an object-oriented discrete-event language. It allows the user to set the number of stations at run time, as well as some station parameters, namely the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two station types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size, and average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted and the number successfully transmitted are collected, in addition to the throughput and mean packet delay per station.

  14. Compression socks and functional recovery following marathon running: a randomized controlled trial.

    PubMed

    Armstrong, Stuart A; Till, Eloise S; Maloney, Stephen R; Harris, Gregory A

    2015-02-01

    Compression socks have become a popular recovery aid for distance running athletes. Although some physiological markers have been shown to be influenced by wearing these garments, scant evidence exists on their effects on functional recovery. This research aims to shed light on whether wearing compression socks for 48 hours after marathon running can improve functional recovery, as measured by a timed treadmill test to exhaustion 14 days after the marathon. Athletes (n = 33, age 38.5 ± 7.2 years) participating in the 2012 Melbourne, 2013 Canberra, or 2013 Gold Coast marathons were recruited and randomized into the compression sock or placebo group. A graded treadmill test to exhaustion was performed 2 weeks before and 2 weeks after each marathon. Time to exhaustion and average and maximum heart rates were recorded. Participants were asked to wear their socks for 48 hours immediately after completion of the marathon. The change in treadmill time (seconds) was recorded for each participant. Thirty-three participants completed the treadmill protocols. In the compression group, average treadmill run-to-exhaustion time 2 weeks after the marathon increased by 2.6% (52 ± 103 seconds). In the placebo group, run-to-exhaustion time decreased by 3.4% (-62 ± 130 seconds), P = 0.009. This shows a significant beneficial effect of compression socks on recovery compared with placebo. Wearing below-knee compression socks for 48 hours after marathon running thus improved functional recovery as measured by a graded treadmill test to exhaustion 2 weeks after the event.

  15. Prediction of half-marathon race time in recreational female and male runners.

    PubMed

    Knechtle, Beat; Barandun, Ursula; Knechtle, Patrizia; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A

    2014-01-01

    Half-marathon running is highly popular. Recent studies have tried to find predictor variables for half-marathon race time in recreational female and male runners and to present equations to predict race time. The existing equations included running speed during training as the training variable for both women and men, but midaxillary skinfold for women and body mass index for men as the anthropometric variable. A recent study found that percent body fat and running speed during training sessions were the best predictor variables for half-marathon race time in both women and men. The aim of the present study was to improve the existing equations to predict half-marathon race time in a larger sample of male and female half-marathoners by using percent body fat and running speed during training sessions as predictor variables. In a sample of 147 men and 83 women, multiple linear regression analyses with percent body fat and running speed during training sessions as independent variables and race time as the dependent variable were performed, and an equation was derived to predict half-marathon race time. For men, half-marathon race time might be predicted by the equation (r(2) = 0.42, adjusted r(2) = 0.41, SE = 13.3): half-marathon race time (min) = 142.7 + 1.158 × percent body fat (%) - 5.223 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.71, p < 0.0001) with the achieved race time. For women, half-marathon race time might be predicted by the equation (r(2) = 0.68, adjusted r(2) = 0.68, SE = 9.8): race time (min) = 168.7 + 1.077 × percent body fat (%) - 7.556 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.89, p < 0.0001) with the achieved race time. The coefficients of determination of these models were slightly higher than those of the existing equations. Future studies might include physiological variables to increase the coefficients of determination of the models.
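
The two published regression equations transcribe directly into a small helper function (coefficients exactly as given in the abstract; note that any prediction still carries the quoted standard error of roughly 13 min for men and 10 min for women):

```python
# Half-marathon race-time prediction equations from the abstract above.
# Inputs: percent body fat (%) and running speed during training (km/h).
# Output: predicted race time in minutes.

def predict_half_marathon_minutes(sex: str,
                                  body_fat_pct: float,
                                  training_speed_kmh: float) -> float:
    if sex == "male":
        # r2 = 0.42, adjusted r2 = 0.41, SE = 13.3 min
        return 142.7 + 1.158 * body_fat_pct - 5.223 * training_speed_kmh
    if sex == "female":
        # r2 = 0.68, adjusted r2 = 0.68, SE = 9.8 min
        return 168.7 + 1.077 * body_fat_pct - 7.556 * training_speed_kmh
    raise ValueError("sex must be 'male' or 'female'")
```

For example, a male runner with 20% body fat who trains at 10 km/h would be predicted to finish in about 114 minutes, with the ±13 min standard error in mind.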

  16. The evolution of micro-cursoriality in mammals.

    PubMed

    Lovegrove, Barry G; Mowoe, Metobor O

    2014-04-15

    In this study we report on the evolution of micro-cursoriality, a unique case of cursoriality in mammals smaller than 1 kg. We obtained new running speed and limb morphology data for two species of elephant-shrews (Elephantulus spp., Macroscelidae) from Namaqualand, South Africa, which we compared with published data for other mammals. Elephantulus maximum running speeds were higher than those of most mammals smaller than 1 kg. Elephantulus also possess exceptionally high metatarsal:femur ratios (1.07) that are typically associated with fast unguligrade cursors. Cursoriality evolved in the Artiodactyla, Perissodactyla and Carnivora coincident with global cooling and the replacement of forests with open landscapes in the Oligocene and Miocene. The majority of mammal species, though, remained non-cursorial, plantigrade and small (<1 kg). The extraordinary running speed and digitigrady of elephant-shrews was established in the Early Eocene in the earliest macroscelid Prodiacodon, but was probably inherited from Paleocene, Holarctic stem macroscelids. Micro-cursoriality in macroscelids evolved from the plesiomorphic plantigrade foot of the possum-like ancestral mammal earlier than in other mammalian crown groups. Micro-cursoriality evolved first in forests, presumably in response to selection for rapid running speeds facilitated by local knowledge, in order to avoid predators. During the Miocene, micro-cursoriality was pre-adaptive to open, arid habitats, and became more derived in the newly evolved Elephantulus and Macroscelides elephant-shrews with trail running.

  17. Natural Whisker-Guided Behavior by Head-Fixed Mice in Tactile Virtual Reality

    PubMed Central

    Sofroniew, Nicholas J.; Cohen, Jeremy D.; Lee, Albert K.

    2014-01-01

    During many natural behaviors the relevant sensory stimuli and motor outputs are difficult to quantify. Furthermore, the high dimensionality of the space of possible stimuli and movements compounds the problem of experimental control. Head fixation facilitates stimulus control and movement tracking, and can be combined with techniques for recording and manipulating neural activity. However, head-fixed mouse behaviors are typically trained through extensive instrumental conditioning. Here we present a whisker-based, tactile virtual reality system for head-fixed mice running on a spherical treadmill. Head-fixed mice displayed natural movements, including running and rhythmic whisking at 16 Hz. Whisking was centered on a set point that changed in concert with running so that more protracted whisking was correlated with faster running. During turning, whiskers moved in an asymmetric manner, with more retracted whisker positions in the turn direction and protracted whisker movements on the other side. Under some conditions, whisker movements were phase-coupled to strides. We simulated a virtual reality tactile corridor, consisting of two moveable walls controlled in a closed-loop by running speed and direction. Mice used their whiskers to track the walls of the winding corridor without training. Whisker curvature changes, which cause forces in the sensory follicles at the base of the whiskers, were tightly coupled to distance from the walls. Our behavioral system allows for precise control of sensorimotor variables during natural tactile navigation. PMID:25031397

  18. Discrepancy analysis of driving performance of taxi drivers and non-professional drivers for red-light running violation and crash avoidance at intersections.

    PubMed

    Wu, Jiawei; Yan, Xuedong; Radwan, Essam

    2016-06-01

    Owing to their comfort, convenience, and flexibility, taxis have become increasingly prevalent in China, especially in large cities. However, many frequently occurring violations and road crashes involve taxi drivers. This study aimed to investigate differences in driving performance between taxi drivers and non-professional drivers from the perspectives of red-light running violations and potential crash involvement, based on a driving simulation experiment. Two typical scenarios were established in a driving simulator: a red-light running violation scenario and a crash avoidance scenario. Forty-nine participants, comprising 23 taxi drivers (14 males and 9 females) and 26 non-professional drivers (13 males and 13 females), were recruited for the experiment. The results indicated that non-professional drivers paid more attention to avoiding red-light running violations, whereas taxi drivers had a higher probability of running a red light. Furthermore, taxi drivers were more inclined to turn the steering wheel in an attempt to avoid a potential collision, while non-professional drivers decelerated more abruptly when facing a potential crash. Moreover, taxi drivers had a lower crash rate than non-professional drivers and performed better in terms of crash avoidance at the intersection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Simulated interactions of pedestrian crossings and motorized vehicles in residential areas

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Peng, Zhongyi; Chen, Qun

    2018-01-01

    To evaluate whether motorized vehicles can travel through a residential area, this paper develops a cellular automata (CA) model to simulate the interactions between pedestrian crossings and motorized vehicles in a residential area. In this paper, pedestrians determine their crossing speed according to their judgments of the position and velocity of the upcoming vehicles. The pedestrians may walk slowly or quickly or even run, and the pedestrian crossing time influences the vehicle movement. In addition, the proposed model considers the safety margin time needed for pedestrians to cross, and pedestrian-vehicle conflict is considered using the vehicle collision avoidance rule. Through simulations of interactions of pedestrian crossings with motorized vehicles' movement on a typical road in a residential area, the average wait time for pedestrians to cross and the average vehicle velocity under different pedestrian crossing volumes, different vehicle flows and different maximum vehicle velocities are obtained. To avoid an excessive waiting time for pedestrians to cross, the vehicle flow should be less than 180 veh/h, which allows an average of less than 10 s of waiting time; if the vehicle flow rate is less than 36 veh/h, then the waiting time is approximately 1 s. Field observations are conducted to validate the simulation results.

  20. Design of ProjectRun21: a 14-week prospective cohort study of the influence of running experience and running pace on running-related injury in half-marathoners.

    PubMed

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard

    2017-11-06

    Participation in half-marathons has increased steeply during the past decade, and in line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which arise when the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Since load capacity increases with adaptive running training, a runner's running experience and pace abilities can be used as estimates of load capacity. Because no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules that take the level of running experience and running pace into account, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for the analyses.
ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries than other subgroups. This will enable sports coaches, physiotherapists, and the runners themselves to evaluate their injury risk before taking up a 14-week running schedule for a half-marathon.
