Sample records for run multiple times

  1. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  2. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    PubMed

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  3. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    NASA Technical Reports Server (NTRS)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  4. Scaling NS-3 DCE Experiments on Multi-Core Servers

    DTIC Science & Technology

    2016-06-15

    that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on

  5. DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.

    PubMed

    Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard

    2004-09-09

    Parallel computing is frequently used to speed up computationally expensive tasks in bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits the sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be substantially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
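
    Because step (a) makes the pairwise alignments fully independent, the distribution pattern is simple to reproduce. Below is a minimal sketch of that pattern using Python's multiprocessing module; the scoring function is a deliberately trivial placeholder, not DIALIGN's actual pairwise routine:

    ```python
    # Distribute independent pairwise alignments over worker processes,
    # mirroring strategy (a) above. align_pair is a trivial stand-in.
    from itertools import combinations
    from multiprocessing import Pool

    def align_pair(task):
        (i, si), (j, sj) = task
        score = sum(a == b for a, b in zip(si, sj))  # placeholder score
        return i, j, score

    def all_pairwise(seqs, workers=4):
        tasks = list(combinations(enumerate(seqs), 2))
        with Pool(workers) as pool:
            # Tasks are independent, so order of completion is irrelevant.
            return pool.map(align_pair, tasks)

    if __name__ == "__main__":
        seqs = ["ACGTAC", "ACGTTC", "AGGTAC"]
        for i, j, s in all_pairwise(seqs, workers=2):
            print(f"pair ({i}, {j}): score {s}")
    ```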

  6. Effect of Fiber Orientation on Dynamic Compressive Properties of an Ultra-High Performance Concrete

    DTIC Science & Technology

    2017-08-01

    measurements for LSFfiberOrient function for multiple cores. Elapsed time is the total time taken to run; CPU time is the number of cores times the...Superscripts Maximum value during a test Measured value from a calibration run ...movement left or right. Before cutting, the Cor-Tuf Baseline beam was placed on the table and squared with the blade. The blade was then moved into

  7. Multiresource allocation and scheduling for periodic soft real-time applications

    NASA Astrophysics Data System (ADS)

    Gopalan, Kartik; Chiueh, Tzi-cker

    2001-12-01

    Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of the soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.

  8. Real-time optical multiple object recognition and tracking system and method

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)

    1987-01-01

    The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time, space and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.

  9. Measuring joint kinematics of treadmill walking and running: Comparison between an inertial sensor based system and a camera-based system.

    PubMed

    Nüesch, Corina; Roos, Elena; Pagenstert, Geert; Mündermann, Annegret

    2017-05-24

    Inertial sensor systems are becoming increasingly popular for gait analysis because their use is simple and time efficient. This study aimed to compare joint kinematics measured by the inertial sensor system RehaGait® with those of an optoelectronic system (Vicon®) for treadmill walking and running. Additionally, the test-retest repeatability of kinematic waveforms and discrete parameters for the RehaGait® was investigated. Twenty healthy runners participated in this study. Inertial sensors and reflective markers (PlugIn Gait) were attached according to the respective guidelines. The two systems were started manually at the same time. Twenty consecutive strides for walking and running were recorded, and the respective software packages calculated sagittal plane ankle, knee and hip kinematics. Measurements were repeated after 20 min. Ensemble means were analyzed by calculating coefficients of multiple correlation for waveforms and root mean square errors (RMSE) for waveforms and discrete parameters. After correcting the offset between waveforms, the two systems/models showed good agreement, with coefficients of multiple correlation above 0.950 for walking and running. RMSE of the waveforms were below 5° for walking and below 8° for running. RMSE for ranges of motion were between 4° and 9° for walking and running. Repeatability analysis of waveforms showed very good to excellent coefficients of multiple correlation (>0.937) and RMSE of 3° for walking and 3-7° for running. These results indicate that in healthy subjects sagittal plane joint kinematics measured with the RehaGait® are comparable to those using a Vicon® system/model and that the measured kinematics have good repeatability, especially for walking. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.

    PubMed

    Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C

    2017-03-01

    Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017. To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five-thousand-meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000 - D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five-thousand-meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study show the importance of aerobic and anaerobic energy system contributions in predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
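
    The two prediction formulas quoted in the abstract are straightforward to apply once CS and D' have been estimated; a minimal sketch (the input values below are illustrative, not taken from the study):

    ```python
    # Predict 5,000-m time and mean speed from critical speed CS (m/s) and
    # the distance covered above CS, D' (m), per the abstract's formulas.
    def predict_5000m(cs, d_prime, distance=5000.0):
        t = (distance - d_prime) / cs  # t = [5,000 - D'] / CS, in seconds
        s = d_prime / t + cs           # s = D'/t + CS, in m/s
        return t, s

    t, s = predict_5000m(cs=4.0, d_prime=200.0)  # illustrative values
    print(f"predicted time {t:.0f} s, predicted speed {s:.2f} m/s")
    ```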

  11. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
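
    The design of one filter copy per tracked object is easy to illustrate with a generic one-dimensional constant-velocity Kalman filter; this is a textbook sketch, not the system's actual angle-tracking model, with the 90 Hz measurement rate from the abstract used as the time step:

    ```python
    import numpy as np

    def kalman_step(x, P, z, dt=1.0 / 90, q=1e-4, r=1e-2):
        """One predict/update cycle of a 1-D constant-velocity Kalman filter.
        x: state [position, velocity]; P: 2x2 covariance; z: measurement."""
        F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        H = np.array([[1.0, 0.0]])             # position is observed directly
        x = F @ x                              # predict state
        P = F @ P @ F.T + q * np.eye(2)        # predict covariance
        y = z - (H @ x)[0]                     # innovation
        S = (H @ P @ H.T)[0, 0] + r            # innovation variance
        K = (P @ H.T)[:, 0] / S                # Kalman gain
        x = x + K * y                          # update state
        P = P - np.outer(K, H @ P)             # update covariance
        return x, P

    # One independent filter per tracked object, as in the multiple-copies
    # design; a plain loop stands in for the parallel dispatch.
    tracks = [(np.zeros(2), np.eye(2)) for _ in range(3)]
    measurements = [0.10, 0.25, -0.05]
    tracks = [kalman_step(x, P, z) for (x, P), z in zip(tracks, measurements)]
    ```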

  12. Multiple Counseling in Open and Closed Time-Extended Groups.

    ERIC Educational Resources Information Center

    Chambers, W. M.

    The open time-extended group, run by multiple counselors, adds a facilitating dimension to the counseling function--a dimension that exemplifies the concepts of self-growth and self-actualization by first providing the atmosphere for the client and then by allowing him to progress at his own rate and to a depth which he determines. An open group…

  13. 2017 ARL Summer Student Program Volume 2: Compendium of Abstracts

    DTIC Science & Technology

    2017-12-01

    useful for equipping quadrotors with advanced capabilities, such as running deep learning networks. A second purpose of this project is to quantify the...Multiple samples were run in the LEAP 5000-XR generating large data sets (hundreds of millions of ions composing hundreds of cubic nanometers of...produce viable walking and running gaits on the final product. Even further, the monetary and time cost of this increases significantly when working

  14. Effect of sucrose availability on wheel-running as an operant and as a reinforcing consequence on a multiple schedule: Additive effects of extrinsic and automatic reinforcement.

    PubMed

    Belke, Terry W; Pierce, W David

    2015-07-01

    As a follow up to Belke and Pierce's (2014) study, we assessed the effects of repeated presentation and removal of sucrose solution on the behavior of rats responding on a two-component multiple schedule. Rats completed 15 wheel turns (FR 15) for either 15% or 0% sucrose solution in the manipulated component and lever pressed 10 times on average (VR 10) for an opportunity to complete 15 wheel turns (FR 15) in the other component. In contrast to our earlier study, the components advanced based on time (every 8 min) rather than completed responses. Results showed that in the manipulated component wheel-running rates were higher and the latency to initiate running longer when sucrose was present (15%) compared to absent (0% or water); the number of obtained outcomes (sucrose/water), however, did not differ with the presentation and withdrawal of sucrose. For the wheel-running as reinforcement component, rates of wheel turns, overall lever-pressing rates, and obtained wheel-running reinforcements were higher, and postreinforcement pauses shorter, when sucrose was present (15%) than absent (0%) in the manipulated component. Overall, our findings suggest that wheel-running rate regardless of its function (operant or reinforcement) is maintained by automatically generated consequences (automatic reinforcement) and is increased as an operant by adding experimentally arranged sucrose reinforcement (extrinsic reinforcement). This additive effect on operant wheel-running generalizes through induction or arousal to the wheel-running as reinforcement component, increasing the rate of responding for opportunities to run and the rate of wheel-running per opportunity. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Forecasting the Relative and Cumulative Effects of Multiple Stressors on At-risk Populations

    DTIC Science & Technology

    2011-08-01

    Vitals (observed vital rates), Movement, Ranges, Barriers (barrier interactions), Stochasticity (a time series of stochasticity indices...Simulation Viewer are themselves stochastic. They can change each time it is run. Analysis: If multiple Census events are present in the life...30-year period. A monthly time series was generated for the 20th-century using monthly anomalies for temperature, precipitation, and percent

  16. A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall

    NASA Astrophysics Data System (ADS)

    Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian

    2018-02-01

    Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observation. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
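
    The core of a multiplicative cascade is repeated disaggregation: each coarse interval's rainfall is split among finer intervals by random multiplicative weights. A minimal temporal sketch follows; the Beta weight law is illustrative only, and splitting into w and 1 - w conserves each parent total exactly (a microcanonical cascade), whereas STEPS uses a calibrated generator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cascade_step(values):
        """Split each interval's rainfall into two sub-intervals with random
        weights w and 1 - w, conserving every parent total exactly."""
        w = rng.beta(2.0, 2.0, size=len(values))
        return np.column_stack([values * w, values * (1 - w)]).ravel()

    def downscale(total_mm, levels):
        """Disaggregate one coarse accumulation through `levels` cascade
        levels, producing 2**levels fine intervals."""
        field = np.array([total_mm], dtype=float)
        for _ in range(levels):
            field = cascade_step(field)
        return field

    fine = downscale(12.0, levels=6)        # e.g. one 12-mm accumulation
    print(len(fine), round(fine.sum(), 6))  # 64 intervals, still 12.0 mm
    ```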

  17. Relationship between physical activity, physical fitness and multiple metabolic risk in youths from Muzambinho's study.

    PubMed

    Barbosa, João Paulo Dos Anjos Souza; Basso, Luciano; Seabra, André; Prista, Antonio; Tani, Go; Maia, José António Ribeiro; Forjaz, Cláudia Lúcia De Moraes

    2016-08-01

    Negative associations between physical activity (PA), physical fitness, and multiple metabolic risk factors (MMRF) have been reported in youths from populations with low PA. Whether this association persists in moderately to highly active populations is not, however, well established. The aim of the present study was to investigate this association in a Brazilian city with a high frequency of active youths. We assessed 122 subjects (9.9 ± 1.3 years) from Muzambinho city. Body mass index, waist circumference, glycaemia, cholesterolaemia, and systolic and diastolic blood pressures were measured. Maximal handgrip strength and the one-mile walk/run test were used. Leisure-time PA was assessed by interview. Poisson regression was used in the analysis. The model explained 11% of the total variance. Only relative muscular strength and the one-mile walk/run were statistically significant (p < .05). Those who needed more time to cover the one-mile walk/run test had an 11% increase in metabolic risk, and those with greater strength had a risk reduced by about 82%. In conclusion, children and youths from an active population who need less time to cover the one-mile walk/run test or who have greater muscular strength showed a reduced metabolic risk. These results suggest that even in children and youths with high leisure-time PA, greater aerobic fitness and strength might help to further reduce their MMRF.

  18. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment

    PubMed Central

    2013-01-01

    Background: Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. Results: In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Conclusion: Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA. PMID:24564200

  19. Fast discovery and visualization of conserved regions in DNA sequences using quasi-alignment.

    PubMed

    Nagar, Anurag; Hahsler, Michael

    2013-01-01

    Next Generation Sequencing techniques are producing enormous amounts of biological sequence data and analysis becomes a major computational problem. Currently, most analysis, especially the identification of conserved regions, relies heavily on Multiple Sequence Alignment and its various heuristics such as progressive alignment, whose run time grows with the square of the number and the length of the aligned sequences and requires significant computational resources. In this work, we present a method to efficiently discover regions of high similarity across multiple sequences without performing expensive sequence alignment. The method is based on approximating edit distance between segments of sequences using p-mer frequency counts. Then, efficient high-throughput data stream clustering is used to group highly similar segments into so-called quasi-alignments. Quasi-alignments have numerous applications such as identifying species and their taxonomic class from sequences, comparing sequences for similarities, and, as in this paper, discovering conserved regions across related sequences. In this paper, we show that quasi-alignments can be used to discover highly similar segments across multiple sequences from related or different genomes efficiently and accurately. Experiments on a large number of unaligned 16S rRNA sequences obtained from the Greengenes database show that the method is able to identify conserved regions which agree with known hypervariable regions in 16S rRNA. Furthermore, the experiments show that the proposed method scales well for large data sets with a run time that grows only linearly with the number and length of sequences, whereas for existing multiple sequence alignment heuristics the run time grows super-linearly. Quasi-alignment-based algorithms can detect highly similar regions and conserved areas across multiple sequences. Since the run time is linear and the sequences are converted into a compact clustering model, we are able to identify conserved regions fast or even interactively using a standard PC. Our method has many potential applications such as finding characteristic signature sequences for families of organisms and studying conserved and variable regions in, for example, 16S rRNA.
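
    The distance approximation at the heart of the method replaces alignment with a comparison of p-mer frequency counts. A minimal sketch of that idea (p = 3; the Manhattan distance between count profiles is an illustrative stand-in for the paper's exact formulation):

    ```python
    from collections import Counter

    def pmer_counts(segment, p=3):
        """Frequency counts of all overlapping p-mers in a sequence segment."""
        return Counter(segment[i:i + p] for i in range(len(segment) - p + 1))

    def approx_distance(seg_a, seg_b, p=3):
        """Approximate dissimilarity from p-mer count profiles; a cheap
        proxy for edit distance that avoids alignment entirely."""
        ca, cb = pmer_counts(seg_a, p), pmer_counts(seg_b, p)
        return sum(abs(ca[k] - cb[k]) for k in set(ca) | set(cb))

    print(approx_distance("ACGTACGT", "ACGTTCGT"))  # similar: small distance
    print(approx_distance("ACGTACGT", "GGGGCCCC"))  # dissimilar: large
    ```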

  20. Scheduling time-critical graphics on multiple processors

    NASA Technical Reports Server (NTRS)

    Meyer, Tom W.; Hughes, John F.

    1995-01-01

    This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.

  1. Multitasking the code ARC3D. [for computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall-clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall-clock speedup factors of over three, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple-CPU computers.
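
    The reported wall-clock speedup of just over three on four processors is consistent with Amdahl's law for a code that is mostly, but not entirely, parallel. A quick check (the parallel fraction is inferred here for illustration; the abstract does not report it):

    ```python
    # Amdahl's law: speedup S on n processors with parallelizable fraction p
    # is S = 1 / ((1 - p) + p / n). Invert it to see what p a speedup of
    # "over three" on 4 CPUs implies.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    def parallel_fraction(s, n):
        return (1.0 - 1.0 / s) / (1.0 - 1.0 / n)

    print(parallel_fraction(3.0, 4))   # ~0.89: ~89% of the work in parallel
    print(amdahl_speedup(8.0 / 9, 4))  # exactly 3.0x back again
    ```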

  2. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  3. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    ERIC Educational Resources Information Center

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  4. A faster technique for rendering meshes in multiple display systems

    NASA Astrophysics Data System (ADS)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have widely been implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves both the visual quality of the system through use of graphics hardware acceleration, and improves the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.

  5. An Upgrade of the Aeroheating Software ''MINIVER''

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD is often used to create and execute computational domains, with increasing complexity when moving from 2D to 3D geometries; computational time increases as finer grids are used (for accuracy). A strong tool, but it takes time to set up and run. MINIVER: uses theoretical and empirical correlations; orders of magnitude faster to set up and run; not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: rigid command-line interface; lackluster, unorganized documentation; no central control; multiple versions exist and have diverged.

  6. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data, and only output a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  7. Development and testing of a new system for assessing wheel-running behaviour in rodents.

    PubMed

    Chomiak, Taylor; Block, Edward W; Brown, Andrew R; Teskey, G Campbell; Hu, Bin

    2016-05-05

    Wheel running is one of the most widely studied behaviours in laboratory rodents. As a result, improved approaches for objective monitoring and for gathering more detailed information are increasingly important for evaluating rodent wheel-running behaviour. Our aim was to develop a new quantitative wheel-running system suitable for most typical wheel-running experimental protocols. We devised a system that provides a continuous waveform amenable to real-time integration with high-speed video. While quantification of wheel-running behaviour has typically focused on the number of revolutions per unit time as an end-point measure, the approach described here allows more detailed information, such as wheel rotation fluidity, directionality, instantaneous velocity, and acceleration, in addition to the total number of rotations and the temporal pattern of wheel-running behaviour, to be derived from a single trace. We further tested this system with a running-wheel behavioural paradigm that can be used for investigating the neuronal mechanisms of procedural learning and postural stability, and discuss other potentially useful applications. This system and its ability to evaluate multiple wheel-running parameters may become a useful tool for screening new potentially important therapeutic compounds related to many neurological conditions.

  8. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit the generality of usage. First, the information about users' sequences, including the sizes of datasets and the lengths of sequences, can be of arbitrary values and is generally unknown before submission, which is unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerate multiple sequence alignment, and that adopting the co-run computation model can significantly improve entire-system utilization. The source code is available at https://github.com/wangvsa/CMSA.

  9. Personal best marathon time and longest training run, not anthropometry, predict performance in recreational 24-hour ultrarunners.

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald

    2011-08-01

    In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds and running performance has been demonstrated for distances from 100 m to the marathon, but not in ultramarathon. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bi- and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r² = 0.46) by the following equation: performance in a 24-hour run (km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
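
    The reported regression equation can be applied directly; a minimal sketch (the example inputs mirror the practical recommendation in the abstract):

    ```python
    # Predicted 24-hour distance (km) from the study's regression equation
    # (r^2 = 0.46), as a function of the longest training run (km) and the
    # personal best marathon time (minutes).
    def predict_24h_km(longest_run_km, marathon_pb_min):
        return 234.7 + 0.481 * longest_run_km - 0.594 * marathon_pb_min

    # A 60-km longest run and a 3:20 marathon (200 minutes):
    print(predict_24h_km(60, 200))  # ~144.8 km, near the sample mean of 146 km
    ```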

  10. Lower-body determinants of running economy in male and female distance runners.

    PubMed

    Barnes, Kyle R; Mcguigan, Michael R; Kilding, Andrew E

    2014-05-01

    A variety of training approaches have been shown to improve running economy in well-trained athletes. However, there is a paucity of data exploring lower-body determinants that may affect running economy and account for differences that may exist between genders. Sixty-three male and female distance runners were assessed in the laboratory for a range of metabolic, biomechanical, and neuromuscular measures potentially related to running economy (ml·kg⁻¹·min⁻¹) at a range of running speeds. At all common test velocities, women were more economical than men (effect size [ES] = 0.40); however, when compared in terms of relative intensity, men had better running economy (ES = 2.41). Leg stiffness (r = -0.80) and moment arm length (r = 0.90) showed large to extremely large correlations with running economy and with each other (r = -0.82). Correlations between running economy and kinetic measures (peak force, peak power, and time to peak force) for both genders were unclear. The relationship with stride rate (r = -0.27 to -0.31) was in the opposite direction to that with stride length (r = 0.32-0.49), and the relationship with contact time (r = -0.21 to -0.54) was opposite to that with flight time (r = 0.06-0.74). Although both leg stiffness and moment arm length are highly related to running economy, it seems that no single lower-body measure can completely explain differences in running economy between individuals or genders. Running economy is therefore likely determined by the sum of influences from multiple lower-body attributes.

  11. PVIScreen

    EPA Pesticide Factsheets

    PVIScreen extends the concepts of a prior model (BioVapor), which accounted for oxygen-driven biodegradation of multiple constituents of petroleum in the soil above the water table. Typically, the model is run 1000 times using various factors.
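
    "Run 1000 times using various factors" describes a standard Monte Carlo pattern: draw uncertain inputs, evaluate the model, and summarize the output distribution. A generic sketch of that pattern (the model function and parameter ranges are placeholders, not PVIScreen's actual inputs):

    ```python
    import random

    def model(biodeg_fraction, source_conc):
        """Placeholder single model evaluation; PVIScreen's actual
        vapor-transport and biodegradation calculation would go here."""
        return source_conc * (1.0 - biodeg_fraction)

    random.seed(0)
    runs = [model(random.uniform(0.5, 0.99), random.uniform(1.0, 10.0))
            for _ in range(1000)]  # 1000 runs over assumed input ranges
    runs.sort()
    print("median:", runs[499], "95th percentile:", runs[949])
    ```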

  12. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  13. Monitoring of the data processing and simulated production at CMS with a web-based service: the Production Monitoring Platform (pMp)

    NASA Astrophysics Data System (ADS)

    Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.

    2017-10-01

    Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and the processing of the data collected by the experiment. Since the end of the LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever-improving algorithms and calibrations. The scale and complexity of the event simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that comprehensive, frequently updated, and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp), a web-based service, was developed in 2015 to address those needs. It aggregates information from the multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution describes the development of pMp, the evolution of its functionalities, and one and a half years of operational experience.

  14. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background: Integrating multiple data sources is indispensable in improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need further improvement. Results: In this study, we propose a fast and high-performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions: The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
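
    The scoring step, a binary logistic regression mapping network-derived features to a posterior probability of association, can be sketched with scikit-learn; the feature construction below is schematic, standing in for the paper's F2/F3 vectors:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Schematic features for candidate genes (e.g., network proximity to
    # known disease genes); labels mark known gene-disease associations.
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1]
         + rng.normal(scale=0.5, size=200) > 0).astype(int)

    clf = LogisticRegression().fit(X, y)
    # Posterior probability of association for new candidate genes.
    print(clf.predict_proba(rng.normal(size=(5, 3)))[:, 1])
    ```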

  15. During running in place, grid cells integrate elapsed time and distance run

    PubMed Central

    Kraus, Benjamin J.; Brandon, Mark P.; Robinson, Robert J.; Connerney, Michael A.; Hasselmo, Michael E.; Eichenbaum, Howard

    2015-01-01

    Summary: The spatial scale of grid cells may be provided by self-generated motion information or by external sensory information from environmental cues. To determine whether grid cell activity reflects distance traveled or elapsed time independent of external information, we recorded grid cells as animals ran in place on a treadmill. Grid cell activity was only weakly influenced by location but most grid cells and other neurons recorded from the same electrodes strongly signaled a combination of distance and time, with some signaling only distance or time. Grid cells were more sharply tuned to time and distance than non-grid cells. Many grid cells exhibited multiple firing fields during treadmill running, parallel to the periodic firing fields observed in open fields, suggesting a common mode of information processing. These observations indicate that, in the absence of external dynamic cues, grid cells integrate self-generated distance and time information to encode a representation of experience. PMID:26539893

  16. Reduced SWAP-C VICTORY Services Execution and Performance Evaluation

    DTIC Science & Technology

    2012-08-01

    Executing multiple VICTORY data services and reading multiple VICTORY-compliant sensors at the same time resulted in the following performance measurements for the system: 0.64 Amps / 3.15 Watts power consumption at run-time; roughly 0.77% system...

  17. Economic optimization software applied to JFK airport heating and cooling plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gay, R.R.; McCoy, L.

    This paper describes the on-line economic optimization routine developed by Enter Software, Inc. for application at the heating and cooling plant for JFK International Airport near New York City. The objective of the economic optimization is to find the optimum plant configuration (which gas turbines to run, power levels of each gas turbine, duct firing levels, which auxiliary water heaters to run, which electric chillers to run, and which absorption chillers to run) that produces maximum net income at the plant as plant loads and prices vary. The routines also include a planner which runs a series of optimizations over multiple plant configurations to simulate the varying plant operating conditions for the purpose of predicting overall plant results over a period of time.

  18. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
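
    The grid-size figures quoted above follow directly from the block layout; a quick arithmetic check:

    ```python
    blocks, side = 13, 1033
    per_block = side * side      # 1,067,089 (~1.07 million points per block)
    total = blocks * per_block   # 13,872,157 (~13.87 million points)
    print(per_block, total)
    ```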

  19. What is associated with race performance in male 100-km ultra-marathoners--anthropometry, training or marathon best time?

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Senn, Oliver

    2011-03-01

    We investigated the associations of anthropometry, training, and pre-race experience with race time in 93 recreational male ultra-marathoners (mean age 44.6 years, s = 10.0; body mass 74.0 kg, s = 9.0; height 1.77 m, s = 0.06; body mass index 23.4 kg·m⁻², s = 2.0) in a 100-km ultra-marathon using bivariate and multivariate analysis. In the bivariate analysis, body mass index (r = 0.24), the sum of eight skinfolds (r = 0.55), percent body fat (r = 0.57), weekly running hours (r = -0.29), weekly running kilometres (r = -0.49), running speed during training (r = -0.50), and personal best time in a marathon (r = 0.72) were associated with race time. Results of the multiple regression analysis revealed an independent and negative association of weekly running kilometres and average speed in training with race time, as well as a significant positive association between the sum of eight skinfold thicknesses and race time. There was a significant positive association between 100-km race time and personal best time in a marathon. We conclude that both training and anthropometry were independently associated with race performance. These characteristics remained relevant even when controlling for personal best time in a marathon.

  20. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  1. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

    The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  2. Comparison of the phenolic composition of fruit juices by single step gradient HPLC analysis of multiple components versus multiple chromatographic runs optimised for individual families.

    PubMed

    Bremner, P D; Blacklock, C J; Paganga, G; Mullen, W; Rice-Evans, C A; Crozier, A

    2000-06-01

    After minimal sample preparation, two different HPLC methodologies, one based on a single gradient reversed-phase HPLC step, the other on multiple HPLC runs each optimised for specific components, were used to investigate the composition of flavonoids and phenolic acids in apple and tomato juices. The principal components in apple juice were identified as chlorogenic acid, phloridzin, caffeic acid and p-coumaric acid. Tomato juice was found to contain chlorogenic acid, caffeic acid, p-coumaric acid, naringenin and rutin. The quantitative estimates of the levels of these compounds, obtained with the two HPLC procedures, were very similar, demonstrating that either method can be used to analyse accurately the phenolic components of apple and tomato juices. Chlorogenic acid in tomato juice was the only component not fully resolved in the single run study and the multiple run analysis prior to enzyme treatment. The single run system of analysis is recommended for the initial investigation of plant phenolics and the multiple run approach for analyses where chromatographic resolution requires improvement.

  3. Master of Puppets: Cooperative Multitasking for In Situ Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-01-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
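
    The coroutine idea can be illustrated generically in a few lines of Python; this is only a sketch of cooperative multitasking between simulation and analysis, not Henson's actual API.

      # The simulation yields control after each time step; the analysis code
      # post-processes the state while it is still in memory.
      def simulation(n_steps):
          state = 0.0
          for step in range(n_steps):
              state += 1.0          # stand-in for one time step of real work
              yield step, state     # hand control to the analysis code

      def analyze(sim):
          for step, state in sim:
              print(f"step {step}: in situ analysis of state = {state}")

      analyze(simulation(n_steps=5))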

  4. Henson v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Lukic, Zarija

    2016-04-01

    Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of their design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Their design differs significantly from existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.

  5. A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes

    PubMed Central

    Ma, Xin; Shen, Jianping

    2017-01-01

    The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094

  6. Urban Land: Study of Surface Run-off Composition and Its Dynamics

    NASA Astrophysics Data System (ADS)

    Palagin, E. D.; Gridneva, M. A.; Bykova, P. G.

    2017-11-01

    The qualitative composition of urban land surface run-off is liable to significant variation. To study surface run-off dynamics, to examine its behaviour, and to discover the reasons for these variations, it is appropriate to use time series analysis techniques. A seasonal decomposition procedure was applied to a time series of monthly dynamics, with an annual cycle of seasonal variation, using a multiplicative model. The results of the quantitative chemical analysis of surface wastewater at the 22nd Partsjezd outlet in Samara for the period 2004-2016 were used as the basic data. As a result of the analysis, a seasonal pattern of variation in the composition of surface run-off in Samara was identified, and seasonal indices were defined for 15 wastewater quality indicators: BOD (total), suspended solids, mineralization, chlorides, sulphates, ammonium ion, nitrite anion, nitrate anion, phosphates (as phosphorus), total iron, copper, zinc, aluminium, petroleum products, and anionic synthetic surfactants. Based on the seasonal decomposition of the time series data, the contributions of the trend, seasonal, and random components to the variability of the surface run-off indicators were estimated.
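
    A minimal sketch of such a multiplicative seasonal decomposition of a monthly series, using the statsmodels seasonal_decompose function on synthetic data (the actual indicator series are not reproduced here).

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import seasonal_decompose

      idx = pd.date_range("2004-01-01", "2016-12-01", freq="MS")
      rng = np.random.default_rng(0)
      trend = 10 + 0.02 * np.arange(len(idx))                      # slow trend
      season = 1 + 0.3 * np.sin(2 * np.pi * (idx.month - 1) / 12)  # annual cycle
      series = pd.Series(trend * season * rng.lognormal(sigma=0.05, size=len(idx)),
                         index=idx)

      result = seasonal_decompose(series, model="multiplicative", period=12)
      print(result.seasonal.iloc[:12])   # the twelve monthly seasonal indices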

  7. Consistency of internal fluxes in a hydrological model running at multiple time steps

    NASA Astrophysics Data System (ADS)

    Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-04-01

    Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7

  8. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
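
    A toy sketch of the splitting idea (illustrative only, not the UCLA model): a fast linear term is subcycled with a small step inside each large step used for the slow term.

      def emtss_like_step(y, dt_slow, n_sub, fast_rate, slow_forcing):
          dt_fast = dt_slow / n_sub
          for _ in range(n_sub):                # small steps for the fast modes
              y += dt_fast * fast_rate * y
          return y + dt_slow * slow_forcing(y)  # one large step for the slow terms

      y = 1.0
      for _ in range(72):                       # e.g. 72 one-hour steps
          y = emtss_like_step(y, dt_slow=1.0, n_sub=6,
                              fast_rate=-0.05, slow_forcing=lambda s: 0.01 * s)
      print(y)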

  9. Large area silicon sheet by EFG

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A multiple growth run with three 10 cm cartridges was carried out with the best throughput rates and time percentage of simultaneous three ribbon growth achieved to date in this system. Growth speeds were between 3.2 and 3.6 cm/minute on all three cartridges and simultaneous full width growth of three ribbons was achieved 47 percent of the time over the eight hour duration of the experiment. Improvements in instrumentation and in the main zone temperature uniformity were two factors that have led to more reproducible growth conditions in the multiple ribbon furnace.

  10. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
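
    For reference, the direct-method SSA itself fits in a few lines; below is a minimal CPU sketch for a single decay reaction A -> B with rate constant k, with independent realizations collected in a loop, which is what the GPU implementation above runs concurrently.

      import numpy as np

      def ssa_decay(n_a0, k, t_end, rng):
          t, n_a, times, counts = 0.0, n_a0, [0.0], [n_a0]
          while t < t_end and n_a > 0:
              a0 = k * n_a                    # total propensity
              t += rng.exponential(1.0 / a0)  # waiting time to the next event
              n_a -= 1                        # fire the only reaction
              times.append(t)
              counts.append(n_a)
          return np.array(times), np.array(counts)

      rng = np.random.default_rng(1)
      runs = [ssa_decay(100, 0.5, 10.0, rng) for _ in range(1000)]  # many realizations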

  11. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  12. Strategies for Maximizing Successful Drug Substance Technology Transfer Using Engineering, Shake-Down, and Wet Test Runs.

    PubMed

    Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina

    2015-01-01

    The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We consider the key factors in making the decision to perform shake-down or engineering runs. We also present industry benchmarking results on how engineering runs are used in drug substance technology transfers, alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.

  13. Influence of an injury reduction program on injury and fitness outcomes among soldiers

    PubMed Central

    Knapik, J; Bullock, S; Canada, S; Toney, E; Wells, J; Hoedebecke, E; Jones, B

    2004-01-01

    Objective: This study evaluated the influence of a multiple injury control intervention on injury and physical fitness outcomes among soldiers attending United States Army Ordnance School Advanced Individual Training. Methods: The study design was quasiexperimental involving a historical control group (n = 2559) that was compared to a multiple intervention group (n = 1283). Interventions in the multiple intervention group included modified physical training, injury education, and a unit based injury surveillance system (UBISS). The management responsible for training independently formed an Injury Control Advisory Committee that examined surveillance reports from the UBISS and recommended changes to training. On arrival at school, individual soldiers completed a demographics and lifestyle questionnaire and took an army physical fitness test (APFT: push-ups, sit-ups, and two mile run). Injuries among soldiers were tracked by a clinic based injury surveillance system that was separate from the UBISS. Soldiers completed a final APFT eight weeks after arrival at school. Results: Cox regression (survival analysis) was used to examine differences in time to the first injury while controlling for group differences in demographics, lifestyle characteristics, and physical fitness. The adjusted relative risk of a time loss injury was 1.5 (95% confidence interval 1.2 to 1.8) times higher in the historical control men and 1.8 (95% confidence interval 1.1 to 2.8) times higher in the historical control women compared with the multiple intervention men and women, respectively. After correcting for the lower initial fitness of the multiple intervention group, there were no significant differences between the multiple intervention and historical control groups in terms of improvements in push-ups, sit-ups, or two mile run performance. Conclusions: This multiple intervention program contributed to a reduction in injuries while improvements in physical fitness were similar to a traditional physical training program previously used at the school. PMID:14760025

  14. AlexSys: a knowledge-based expert system for multiple sequence alignment construction and analysis

    PubMed Central

    Aniba, Mohamed Radhouene; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn

    2010-01-01

    Multiple sequence alignment (MSA) is a cornerstone of modern molecular biology and represents a unique means of investigating the patterns of conservation and diversity in complex biological systems. Many different algorithms have been developed to construct MSAs, but previous studies have shown that no single aligner consistently outperforms the rest. This has led to the development of a number of ‘meta-methods’ that systematically run several aligners and merge the output into one single solution. Although these methods generally produce more accurate alignments, they are inefficient because all the aligners need to be run first and the choice of the best solution is made a posteriori. Here, we describe the development of a new expert system, AlexSys, for the multiple alignment of protein sequences. AlexSys incorporates an intelligent inference engine to automatically select an appropriate aligner a priori, depending only on the nature of the input sequences. The inference engine was trained on a large set of reference multiple alignments, using a novel machine learning approach. Applying AlexSys to a test set of 178 alignments, we show that the expert system represents a good compromise between alignment quality and running time, making it suitable for high throughput projects. AlexSys is freely available from http://alnitak.u-strasbg.fr/∼aniba/alexsys. PMID:20530533
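
    The a priori selection step can be pictured as a small classifier over input-sequence features; the sketch below uses scikit-learn with invented features and aligner labels, and is not AlexSys's actual inference engine.

      from sklearn.tree import DecisionTreeClassifier

      # Hypothetical features per task: [n_sequences, mean_length, mean_pairwise_identity]
      X_train = [[10, 250, 0.80], [200, 120, 0.35], [50, 600, 0.60], [500, 90, 0.25]]
      y_train = ["mafft", "clustalo", "probcons", "clustalo"]  # hypothetical "best" aligners

      selector = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
      print(selector.predict([[80, 300, 0.55]]))  # choose an aligner before running any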

  15. Walking, running and the evolution of short toes in humans.

    PubMed

    Rolian, Campbell; Lieberman, Daniel E; Hamill, Joseph; Scott, John W; Werbel, William

    2009-03-01

    The phalangeal portion of the forefoot is extremely short relative to body mass in humans. This derived pedal proportion is thought to have evolved in the context of committed bipedalism, but the benefits of shorter toes for walking and/or running have not been tested previously. Here, we propose a biomechanical model of toe function in bipedal locomotion that suggests that shorter pedal phalanges improve locomotor performance by decreasing digital flexor force production and mechanical work, which might ultimately reduce the metabolic cost of flexor force production during bipedal locomotion. We tested this model using kinematic, force and plantar pressure data collected from a human sample representing normal variation in toe length (N=25). The effect of toe length on peak digital flexor forces, impulses and work outputs was evaluated during barefoot walking and running using partial correlations and multiple regression analysis, controlling for the effects of body mass, whole-foot and phalangeal contact times and toe-out angle. Our results suggest that there is no significant increase in digital flexor output associated with longer toes in walking. In running, however, multiple regression analyses based on the sample suggest that increasing average relative toe length by as little as 20% doubles peak digital flexor impulses and mechanical work, probably also increasing the metabolic cost of generating these forces. The increased mechanical cost associated with long toes in running suggests that modern human forefoot proportions might have been selected for in the context of the evolution of endurance running.

  16. GRAPEVINE: Grids about anything by Poisson's equation in a visually interactive networking environment

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.; Mccann, Karen

    1992-01-01

    A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.

  17. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
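
    The monitoring loop at the core of such a system can be sketched with the psutil library (an assumption made here for illustration; the paper's RTM uses library instrumentation and hardware performance counters).

      import psutil

      def monitor(n_samples=5, interval_s=1.0):
          for _ in range(n_samples):
              cpu = psutil.cpu_percent(interval=interval_s)  # % CPU over the interval
              mem = psutil.virtual_memory().percent          # % RAM in use
              print(f"cpu={cpu:.1f}% mem={mem:.1f}%")
              # an adaptive service would analyze these samples and reconfigure here

      monitor()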

  18. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Astrophysics Data System (ADS)

    Watmuff, Jonathan H.

    1992-10-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized, and when implemented it will also lead to a significant reduction in the experimental run-time.

  19. A fundamental study of suction for Laminar Flow Control (LFC)

    NASA Technical Reports Server (NTRS)

    Watmuff, Jonathan H.

    1992-01-01

    This report covers the period forming the first year of the project. The aim is to experimentally investigate the effects of suction as a technique for Laminar Flow Control. Experiments are to be performed which require substantial modifications to be made to the experimental facility. Considerable effort has been spent developing new high performance constant temperature hot-wire anemometers for general purpose use in the Fluid Mechanics Laboratory. Twenty instruments have been delivered. An important feature of the facility is that it is totally automated under computer control. Unprecedentedly large quantities of data can be acquired and the results examined using the visualization tools developed specifically for studying the results of numerical simulations on graphics workstations. The experiment must be run for periods of up to a month at a time since the data is collected on a point-by-point basis. Several techniques were implemented to reduce the experimental run-time by a significant factor. Extra probes have been constructed and modifications have been made to the traverse hardware and to the real-time experimental code to enable multiple probes to be used. This will reduce the experimental run-time by the appropriate factor. Hot-wire calibration drift has been a frustrating problem owing to the large range of ambient temperatures experienced in the laboratory. The solution has been to repeat the calibrations at frequent intervals. However, the calibration process has consumed up to 40 percent of the run-time. A new method of correcting the drift is very nearly finalized, and when implemented it will also lead to a significant reduction in the experimental run-time.

  20. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
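
    The job-distribution idea generalizes beyond R; a generic Python sketch (not the package itself) farms independent STRUCTURE-like jobs out to a process pool, with the command line invented for illustration.

      from multiprocessing import Pool
      import subprocess

      def run_job(k):
          # hypothetical invocation; the real STRUCTURE command line differs
          cmd = ["echo", f"structure run with K={k}"]
          return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

      if __name__ == "__main__":
          with Pool(processes=4) as pool:
              print(pool.map(run_job, range(1, 9)))  # e.g. K = 1..8 in parallel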

  1. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Hydropower plant (HPP) generation comprises a considerable portion of bulk electricity generation and is delivered with a low-carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include wind and solar. Increasing penetration levels of wind and solar lead to a lower inertia on the electric grid, which poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments of renewable energy resources on electric grids. If integrated with scalable, multi-time-step energy storage so that the total output can be controlled, multiple run-of-the-river (ROR) HPPs can be deployed. Although the size of a single energy storage system is much smaller than that of a typical reservoir, the ratings of storages and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. This paper proposes cohesively managing multiple sets of energy storage systems distributed in different locations. This paper also describes the challenges associated with ROR HPP system architecture and operation.

  2. Prediction of half-marathon race time in recreational female and male runners.

    PubMed

    Knechtle, Beat; Barandun, Ursula; Knechtle, Patrizia; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A

    2014-01-01

    Half-marathon running is highly popular. Recent studies have tried to find predictor variables for half-marathon race time in recreational female and male runners and to present equations to predict race time. The existing equations included running speed during training as the training variable for both women and men, but midaxillary skinfold for women and body mass index for men as the anthropometric variable. A recent study found that percent body fat and running speed during training sessions were the best predictor variables for half-marathon race times in both women and men. The aim of the present study was to improve the existing equations to predict half-marathon race time in a larger sample of male and female half-marathoners by using percent body fat and running speed during training sessions as predictor variables. In a sample of 147 men and 83 women, multiple linear regression analyses with percent body fat and running speed during training sessions as independent variables and race time as the dependent variable were performed, and equations were derived to predict half-marathon race time. For men, half-marathon race time might be predicted by the equation (r(2) = 0.42, adjusted r(2) = 0.41, SE = 13.3): half-marathon race time (min) = 142.7 + 1.158 × percent body fat (%) - 5.223 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.71, p < 0.0001) with the achieved race time. For women, half-marathon race time might be predicted by the equation (r(2) = 0.68, adjusted r(2) = 0.68, SE = 9.8): race time (min) = 168.7 + 1.077 × percent body fat (%) - 7.556 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.89, p < 0.0001) with the achieved race time. The coefficients of determination of these models were slightly higher than those of the existing equations. Future studies might include physiological variables to increase the coefficients of determination further.
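
    The two reported equations, transcribed directly as Python functions (inputs: percent body fat in %, training speed in km/h; output: predicted race time in minutes).

      def half_marathon_time_men(body_fat_pct, training_speed_kmh):
          return 142.7 + 1.158 * body_fat_pct - 5.223 * training_speed_kmh

      def half_marathon_time_women(body_fat_pct, training_speed_kmh):
          return 168.7 + 1.077 * body_fat_pct - 7.556 * training_speed_kmh

      print(half_marathon_time_men(18.0, 11.0))    # e.g. ~106 min
      print(half_marathon_time_women(22.0, 10.0))  # e.g. ~117 min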

  3. NIR camera and spectrograph SWIMS for TAO 6.5m telescope: array control system and its performance

    NASA Astrophysics Data System (ADS)

    Terao, Yasunori; Motohara, Kentaro; Konishi, Masahiro; Takahashi, Hidenori; Kato, Natsuko M.; Kitagawa, Yutaro; Kobayakawa, Yutaka; Ohashi, Hirofumi; Tateuchi, Ken; Todo, Soya

    2016-08-01

    SWIMS (Simultaneous-color Wide-field Infrared Multi-object Spectrograph) is a near-infrared imager and multi-object spectrograph as one of the first generation instruments for the University of Tokyo Atacama Observatory (TAO) 6.5m telescope. In this paper, we describe an array control system of SWIMS and results of detector noise performance evaluation. SWIMS incorporates four (and eight in future) HAWAII-2RG focal plane arrays for detectors, each driven by readout electronics components: a SIDECAR ASIC and a JADE2 Card. The readout components are controlled by a HAWAII-2RG Testing Software running on a virtual Windows machine on a Linux PC called array control PC. All of those array control PCs are then supervised by a SWIMS control PC. We have developed an "array control software system", which runs on the array control PC to control the HAWAII-2RG Testing Software, and consists of a socket client and a dedicated server called device manager. The client runs on the SWIMS control PC, and the device manager runs on the array control PC. An exposure command, issued by the client on the SWIMS control PC, is sent to the multiple device managers on the array control PCs, and then multiple HAWAII-2RGs are driven simultaneously. Using this system, we evaluate readout noise performances of the detectors, both in a test dewar and in a SWIMS main dewar. In the test dewar, we confirm the readout noise to be 4.3 e- r.m.s. by 32 times multiple sampling when we operate only a single HAWAII-2RG, whereas in the case of simultaneous driving of two HAWAII-2RGs, we still obtain sufficiently low readout noise of 10 e- r.m.s. In the SWIMS main dewar, although there are some differences between the detectors, the readout noise is measured to be 4.1-4.6 e- r.m.s. with simultaneous driving by 64 times multiple sampling, which meets the requirement for background-limited observations in J band of 14 e- r.m.s.
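
    The client/device-manager pattern can be sketched with plain sockets; the hosts, port, and message format below are invented for illustration and are not SWIMS's actual protocol.

      import socket

      MANAGERS = [("arrayctl1", 5000), ("arrayctl2", 5000)]  # hypothetical array control PCs

      def send_exposure(exptime_s):
          # a real system would issue these in parallel so that all detectors
          # start simultaneously; a plain loop keeps the sketch short
          for host, port in MANAGERS:
              with socket.create_connection((host, port), timeout=5) as s:
                  s.sendall(f"EXPOSE {exptime_s}\n".encode())
                  print(host, s.recv(1024).decode().strip())  # e.g. "ACK"

      # send_exposure(10.0)  # would trigger all HAWAII-2RGs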

  4. The effects of multiple obstacles on the locomotor behavior and performance of a terrestrial lizard.

    PubMed

    Parker, Seth E; McBrayer, Lance D

    2016-04-01

    Negotiation of variable terrain is important for many small terrestrial vertebrates. Variation in the running surface resulting from obstacles (woody debris, vegetation, rocks) can alter escape paths and running performance. The ability to navigate obstacles likely influences survivorship through predator evasion success and other key ecological tasks (finding mates, acquiring food). Earlier work established that running posture and sprint performance are altered when organisms face an obstacle, and yet studies involving multiple obstacles are limited. Indeed, some habitats are cluttered with obstacles, whereas others are not. For many species, obstacle density may be important in predator escape and/or colonization potential by conspecifics. This study examines how multiple obstacles influence running behavior and locomotor posture in lizards. We predict that an increasing number of obstacles will increase the frequency of pausing and decrease sprint velocity. Furthermore, bipedal running over multiple obstacles is predicted to maintain greater mean sprint velocity compared with quadrupedal running, thereby revealing a potential advantage of bipedalism. Lizards were filmed running through a racetrack with zero, one or two obstacles. Bipedal running posture over one obstacle was significantly faster than quadrupedal posture. Bipedal running trials contained fewer total strides than quadrupedal ones. But on addition of a second obstacle, the number of bipedal strides decreased. Increasing obstacle number led to slower and more intermittent locomotion. Bipedalism provided clear advantages for one obstacle, but was not associated with further benefits for an additional obstacle. Hence, bipedalism helps mitigate obstacle negotiation, but not when numerous obstacles are encountered in succession. © 2016. Published by The Company of Biologists Ltd.

  5. Overall Preference of Running Shoes Can Be Predicted by Suitable Perception Factors Using a Multiple Regression Model.

    PubMed

    Tay, Cheryl Sihui; Sterzing, Thorsten; Lim, Chen Yen; Ding, Rui; Kong, Pui Wah

    2017-05-01

    This study examined (a) the strength of four individual footwear perception factors to influence the overall preference of running shoes and (b) whether these perception factors satisfied the nonmulticollinear assumption in a regression model. Running footwear must fulfill multiple functional criteria to satisfy its potential users. Footwear perception factors, such as fit and cushioning, are commonly used to guide shoe design and development, but it is unclear whether running-footwear users are able to differentiate one factor from another. One hundred casual runners assessed four running shoes on a 15-cm visual analogue scale for four footwear perception factors (fit, cushioning, arch support, and stability) as well as for overall preference during a treadmill running protocol. Diagnostic tests showed an absence of multicollinearity between factors, where values for tolerance ranged from .36 to .72, corresponding to variance inflation factors of 2.8 to 1.4. The multiple regression model of these four footwear perception variables accounted for 77.7% to 81.6% of variance in overall preference, with each factor explaining a unique part of the total variance. Casual runners were able to rate each footwear perception factor separately, thus assigning each factor a true potential to improve overall preference for the users. The results also support the use of a multiple regression model of footwear perception factors to predict overall running shoe preference. Regression modeling is a useful tool for running-shoe manufacturers to more precisely evaluate how individual factors contribute to the subjective assessment of running footwear.
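
    The collinearity diagnostics reported above (tolerance and its reciprocal, the VIF) can be reproduced with statsmodels; the column names and synthetic ratings below are assumptions, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor

      rng = np.random.default_rng(0)
      n = 100
      base = rng.normal(7, 2, n)            # shared component induces correlation
      factors = pd.DataFrame({
          "fit": base + rng.normal(0, 2, n),
          "cushioning": base + rng.normal(0, 2, n),
          "arch_support": base + rng.normal(0, 3, n),
          "stability": base + rng.normal(0, 3, n),
      })
      X = sm.add_constant(factors)
      for i, name in enumerate(factors.columns, start=1):  # index 0 is the constant
          vif = variance_inflation_factor(X.values, i)
          print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")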

  6. MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science

    NASA Astrophysics Data System (ADS)

    Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke

    2011-12-01

    We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile.
    Program summary
    Program title: MsSpec-1.0
    Catalogue identifier: AEJT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 504 438
    No. of bytes in distributed program, including test data, etc.: 14 448 180
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: Any
    Operating system: Linux, MacOs
    RAM: Bytes
    Classification: 7.2
    External routines: Lapack (http://www.netlib.org/lapack/)
    Nature of problem: Calculation of the cross-section of various spectroscopies.
    Solution method: Multiple scattering.
    Running time: The test runs provided only take a few seconds to run.

  7. Performance Analysis of and Tool Support for Transactional Memory on BG/Q

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schindewolf, M

    2011-12-08

    Martin Schindewolf worked during his internship at the Lawrence Livermore National Laboratory (LLNL) under the guidance of Martin Schulz in the Computer Science Group of the Center for Applied Scientific Computing. We studied the performance of the TM subsystem of BG/Q and researched the possibilities for tool support for TM. To study the performance, we ran CLOMP-TM, a benchmark designed to quantify the overhead of OpenMP and to compare different synchronization primitives. To advance CLOMP-TM, we added Message Passing Interface (MPI) routines for a hybrid parallelization. This makes it possible to run multiple MPI tasks, each running OpenMP, on one node. With these enhancements, a beneficial MPI task to OpenMP thread ratio is determined. Further, the synchronization primitives are ranked as a function of the application characteristics. To demonstrate the usefulness of these results, we investigated a real Monte Carlo simulation called the Monte Carlo Benchmark (MCB). Applying the lessons learned yielded the best task to thread ratio, and we were able to tune the synchronization by transactifying the MCB. We also developed tools that capture the performance of the TM run-time system and present it to the application developer. The performance of the TM run-time system relies on its built-in statistics. These tools use the Blue Gene Performance Monitoring (BGPM) interface to correlate the statistics from the TM run-time system with performance counter values. This combination provides detailed insight into the run-time behavior of the application and enables tracking down the causes of degraded performance. One tool separates the performance counters into three categories: Successful Speculation, Unsuccessful Speculation, and No Speculation. All of the tools are crafted around IBM's xlc compiler for C and C++ and have been run and tested on a Q32 early-access system.

  8. Decentralized operating procedures for orchestrating data and behavior across distributed military systems and assets

    NASA Astrophysics Data System (ADS)

    Peach, Nicholas

    2011-06-01

    In this paper, we present a method for a highly decentralized yet structured and flexible approach to achieve systems interoperability by orchestrating data and behavior across distributed military systems and assets, with security considerations addressed from the beginning. We describe an architecture for a tool-based design of business processes called Decentralized Operating Procedures (DOP) and the deployment of DOPs onto run-time nodes, supporting the parallel execution of each DOP at multiple implementation nodes (fixed locations, vehicles, sensors and soldiers) throughout a battlefield to achieve flexible and reliable interoperability. The described method allows the architecture to: a) provide fine-grained control of the collection and delivery of data between systems; b) allow the definition of a DOP at a strategic (or doctrine) level by defining required system behavior through process syntax at an abstract level, agnostic of implementation details; c) deploy a DOP into heterogeneous environments by the nomination of actual system interfaces and roles at a tactical level; d) rapidly deploy new DOPs in support of new tactics and systems; e) support multiple instances of a DOP in support of multiple missions; f) dynamically add or remove run-time nodes from a specific DOP instance as mission requirements change; g) model the passage of, and business reasons for, the transmission of each data message to a specific DOP instance to support accreditation; h) run on low-powered computers with lightweight tactical messaging. This approach is designed to extend the capabilities of existing standards, such as the Generic Vehicle Architecture (GVA).

  9. Estimating Angle-of-Arrival and Time-of-Flight for Multipath Components Using WiFi Channel State Information.

    PubMed

    Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil

    2018-05-29

    Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localization. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. The high accuracy and low computational complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
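
    A textbook one-dimensional matrix-pencil estimator is sketched below for a sum of complex exponentials (the authors' modified algorithm and pairing step are not reproduced); the recovered pole angles play the role of the AoA or ToF phase slopes across antennas or subcarriers.

      import numpy as np
      from scipy.linalg import hankel, pinv

      def matrix_pencil(x, n_modes, pencil=None):
          n = len(x)
          L = pencil or n // 2
          Y = hankel(x[: n - L], x[n - L - 1 :])             # (n-L) x (L+1) data matrix
          z = np.linalg.eigvals(pinv(Y[:, :-1]) @ Y[:, 1:])  # pencil eigenvalues
          return sorted(z, key=abs, reverse=True)[:n_modes]  # keep the dominant poles

      n = 64
      x = (np.exp(1j * 0.5 * np.arange(n))                   # two noiseless "paths"
           + 0.7 * np.exp(1j * 1.2 * np.arange(n)))
      print(np.angle(matrix_pencil(x, n_modes=2)))           # ~0.5 and 1.2 (order may vary)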

  10. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time, it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When comparing the purchase price of a Linux workstation with a Xeon Phi co-processor card against that of a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.

  11. Unifications and Extensions of the Multiple Access Communications Problem.

    DTIC Science & Technology

    1981-07-01

    "Control, Stability and Waiting Time in a Slotted ALOHA Random Access System," IEEE... queueing them, the control procedure must tolerate a larger average number of messages in the system if it is to limit the number of times that the system... running faster than real time to provide some flow control for that class. The virtual clocks for every other class merely act as a "gate" which

  12. Multiple Off-Ice Performance Variables Predict On-Ice Skating Performance in Male and Female Division III Ice Hockey Players.

    PubMed

    Janot, Jeffrey M; Beltz, Nicholas M; Dalleck, Lance D

    2015-09-01

    The purpose of this study was to determine if off-ice performance variables could predict on-ice skating performance in Division III collegiate hockey players. Both men (n = 15) and women (n = 11) hockey players (age = 20.5 ± 1.4 years) participated in the study. The skating tests were agility cornering S-turn, 6.10 m acceleration, 44.80 m speed, modified repeat skate, and 15.20 m full speed. Off-ice variables assessed were years of playing experience, height, weight, and percent body fat; off-ice performance variables included vertical jump (VJ), 40-yd dash (36.58 m), 1-RM squat, pro-agility, Wingate peak power and peak power percentage drop (% drop), and 1.5 mile (2.4 km) run. Results indicated that 40-yd dash (36.58 m), VJ, 1.5 mile (2.4 km) run, and % drop were significant predictors of skating performance for repeat skate (slowest, fastest, and average time) and 44.80 m speed time, respectively. Four predictive equations were derived from multiple regression analyses: 1) slowest repeat skate time = 2.362 + (1.68 x 40-yd dash time) + (0.005 x 1.5 mile run), 2) fastest repeat skate time = 9.762 - (0.089 x VJ) - (0.998 x 40-yd dash time), 3) average repeat skate time = 7.770 + (1.041 x 40-yd dash time) - (0.63 x VJ) + (0.003 x 1.5 mile time), and 4) 47.85 m speed test = 7.707 - (0.050 x VJ) - (0.01 x % drop). It was concluded that selected off-ice tests could be used to predict on-ice performance regarding speed and recovery ability in Division III male and female hockey players. Key points: (1) The 40-yd dash (36.58 m) and vertical jump tests are significant predictors of on-ice skating performance specific to speed. (2) In addition to the 40-yd dash and vertical jump, the 1.5 mile (2.4 km) run for time and percent power drop from the Wingate anaerobic power test were also significant predictors of skating performance that incorporates the aspect of recovery from skating activity. (3) Due to the specificity of selected off-ice variables as predictors of on-ice performance, coaches can elect to assess player performance off-ice and focus on other uses of valuable ice time for their individual teams.
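
    The four reported equations, transcribed directly as Python functions (inputs in the units used by the study's off-ice tests).

      def slowest_repeat_skate(dash_40yd, run_1_5mile):
          return 2.362 + 1.68 * dash_40yd + 0.005 * run_1_5mile

      def fastest_repeat_skate(vj, dash_40yd):
          return 9.762 - 0.089 * vj - 0.998 * dash_40yd

      def average_repeat_skate(dash_40yd, vj, run_1_5mile):
          return 7.770 + 1.041 * dash_40yd - 0.63 * vj + 0.003 * run_1_5mile

      def speed_test(vj, pct_power_drop):
          return 7.707 - 0.050 * vj - 0.01 * pct_power_drop

      print(fastest_repeat_skate(25.0, 5.2))  # example call with illustrative inputs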

  13. The CHAT System: An OS/360 MVT Time-Sharing Subsystem for Displays and Teletype. Technical Progress Report.

    ERIC Educational Resources Information Center

    Schultz, Gary D.

    The design and operation of a time-sharing monitor are described. It runs under OS/360 MVT that supports multiple application program interaction with operators of CRT (cathode ray tube) display stations and of a teletype. Key design features discussed include: 1) an interface allowing application programs to be coded in either PL/I or assembler…

  14. Within-Subject Correlation Analysis to Detect Functional Areas Associated With Response Inhibition.

    PubMed

    Yamasaki, Tomoko; Ogawa, Akitoshi; Osada, Takahiro; Jimura, Koji; Konishi, Seiki

    2018-01-01

    Functional areas in fMRI studies are often detected by brain-behavior correlation, calculating across-subject correlation between the behavioral index and the brain activity related to a function of interest. Within-subject correlation analysis is also employed at a single-subject level, which utilizes cognitive fluctuations in a shorter time period by correlating the behavioral index with the brain activity across trials. In the present study, the within-subject analysis was applied to the stop-signal task, a standard task to probe response inhibition, where efficiency of response inhibition can be evaluated by the stop-signal reaction time (SSRT). Since the SSRT is estimated, by definition, not on a trial basis but from pooled trials, the correlation across runs was calculated between the SSRT and the brain activity related to response inhibition. The within-subject correlation revealed negative correlations in the anterior cingulate cortex and the cerebellum. Moreover, a dissociation pattern was observed in the within-subject analysis when earlier vs. later parts of the runs were analyzed: negative correlation was dominant in earlier runs, whereas positive correlation was dominant in later runs. Regions of interest analyses revealed that the negative correlation in the anterior cingulate cortex, but not in the cerebellum, was dominant in earlier runs, suggesting multiple mechanisms associated with inhibitory processes that fluctuate on a run-by-run basis. These results indicate that the within-subject analysis complements the across-subject analysis by highlighting different aspects of cognitive/affective processes related to response inhibition.

  15. Evidence for positive, but not negative, behavioral contrast with wheel-running reinforcement on multiple variable-ratio schedules.

    PubMed

    Belke, Terry W; Pierce, W David

    2016-12-01

    Rats responded on a multiple variable-ratio (VR) 10 VR 10 schedule of reinforcement in which lever pressing was reinforced by the opportunity to run in a wheel for 30s in both the changed (manipulated) and unchanged components. To generate positive contrast, the schedule of reinforcement in the changed component was shifted to extinction; to generate negative contrast, the schedule was shifted to VR 3. With the shift to extinction in the changed component, wheel-running and local lever-pressing rates increased in the unchanged component, a result supporting positive contrast; however, the shift to a VR 3 schedule in the changed component showed no evidence of negative contrast in the unaltered setting, only wheel running decreased in the unchanged component. Changes in wheel-running rates across components were consistent in showing a compensation effect, depending on whether the schedule manipulation increased or decreased opportunities for wheel running in the changed component. These findings are the first to demonstrate positive behavioral contrast on a multiple schedule with wheel running as reinforcement in both components. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. PalymSys (TM): An extended version of CLIPS for construction and reasoning using blackboards

    NASA Technical Reports Server (NTRS)

    Bryson, Travis; Ballard, Dan

    1994-01-01

    This paper describes PalymSys(TM) -- an extended version of the CLIPS language that is designed to facilitate the implementation of blackboard systems. The paper first describes the general characteristics of blackboards and shows how a control blackboard architecture can be used by AI systems to examine their own behavior and adapt to real-time problem-solving situations by striking a balance between domain and control reasoning. The paper then describes the use of PalymSys in the development of a situation assessment subsystem for use aboard Army helicopters. This system performs real-time inferencing about the current battlefield situation using multiple domain blackboards as well as a control blackboard. A description of the control and domain blackboards and their implementation is presented. The paper also describes modifications made to the standard CLIPS 6.02 language in PalymSys(TM) 2.0. These include: (1) a dynamic Dempster-Shafer belief network whose structure is completely specifiable at run-time in the consequent of a PalymSys(TM) rule, (2) extension of the run command including a continuous run feature that enables the system to run even when the agenda is empty, and (3) a built-in communications link that uses shared memory to communicate with other independent processes.
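
    At the core of such a belief network is Dempster's rule of combination; the generic Python sketch below illustrates the rule itself, not PalymSys's run-time implementation.

      def combine(m1, m2):
          """Combine two mass functions whose focal elements are frozensets."""
          combined, conflict = {}, 0.0
          for b, p in m1.items():
              for c, q in m2.items():
                  inter = b & c
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + p * q
                  else:
                      conflict += p * q   # mass assigned to contradictory pairs
          return {a: v / (1.0 - conflict) for a, v in combined.items()}

      m1 = {frozenset({"threat"}): 0.6, frozenset({"threat", "benign"}): 0.4}
      m2 = {frozenset({"threat"}): 0.5, frozenset({"benign"}): 0.3,
            frozenset({"threat", "benign"}): 0.2}
      print(combine(m1, m2))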

  17. Ring-Shaped Microlanes and Chemical Barriers as a Platform for Probing Single-Cell Migration.

    PubMed

    Schreiber, Christoph; Segerer, Felix J; Wagner, Ernst; Roidl, Andreas; Rädler, Joachim O

    2016-05-31

    Quantification and discrimination of pharmaceutical and disease-related effects on cell migration requires detailed characterization of single-cell motility. In this context, micropatterned substrates that constrain cells within defined geometries facilitate quantitative readout of locomotion. Here, we study quasi-one-dimensional cell migration in ring-shaped microlanes. We observe bimodal behavior in form of alternating states of directional migration (run state) and reorientation (rest state). Both states show exponential lifetime distributions with characteristic persistence times, which, together with the cell velocity in the run state, provide a set of parameters that succinctly describe cell motion. By introducing PEGylated barriers of different widths into the lane, we extend this description by quantifying the effects of abrupt changes in substrate chemistry on migrating cells. The transit probability decreases exponentially as a function of barrier width, thus specifying a characteristic penetration depth of the leading lamellipodia. Applying this fingerprint-like characterization of cell motion, we compare different cell lines, and demonstrate that the cancer drug candidate salinomycin affects transit probability and resting time, but not run time or run velocity. Hence, the presented assay allows to assess multiple migration-related parameters, permits detailed characterization of cell motility, and has potential applications in cell biology and advanced drug screening.
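
    The exponential decay of transit probability with barrier width, and the penetration depth it defines, can be extracted with a one-parameter fit; the numbers below are synthetic, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      widths = np.array([5.0, 10.0, 20.0, 40.0])      # barrier widths, synthetic
      p_transit = np.array([0.70, 0.48, 0.24, 0.06])  # fraction of cells crossing

      decay = lambda w, lam: np.exp(-w / lam)
      (lam,), _ = curve_fit(decay, widths, p_transit, p0=[10.0])
      print(f"characteristic penetration depth ~ {lam:.1f}")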

  18. Structure-seeking multilinear methods for the analysis of fMRI data.

    PubMed

    Andersen, Anders H; Rayens, William S

    2004-06-01

    In comprehensive fMRI studies of brain function, the data structures often contain higher-order ways such as trial, task condition, subject, and group in addition to the intrinsic dimensions of time and space. While multivariate bilinear methods such as principal component analysis (PCA) have been used successfully for extracting information about spatial and temporal features in data from a single fMRI run, the need to unfold higher-order data sets into bilinear arrays has led to decompositions that are nonunique and to the loss of multiway linkages and interactions present in the data. These additional dimensions or ways can be retained in multilinear models to produce structures that are unique and which admit interpretations that are neurophysiologically meaningful. Multiway analysis of fMRI data from multiple runs of a bilateral finger-tapping paradigm was performed using the parallel factor (PARAFAC) model. A trilinear model was fitted to a data cube of dimensions voxels by time by run. Similarly, a quadrilinear model was fitted to a higher-way structure of dimensions voxels by time by trial by run. The spatial and temporal response components were extracted and validated by comparison to results from traditional SVD/PCA analyses based on scenarios of unfolding into lower-order bilinear structures.
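
    A trilinear PARAFAC fit of a voxels × time × run cube can be sketched with the open-source tensorly library (an outside tool, not the authors' code); the data below are random stand-ins for fMRI measurements.

    ```python
    # Sketch of a trilinear PARAFAC (CP) fit to a voxels x time x run cube.
    # Uses tensorly; the (weights, factors) unpacking follows tensorly >= 0.4.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    X = tl.tensor(np.random.rand(500, 120, 8))   # synthetic stand-in for fMRI data
    weights, factors = parafac(X, rank=3)        # 3-component CP decomposition

    spatial, temporal, run_loadings = factors    # one factor matrix per way
    print(spatial.shape, temporal.shape, run_loadings.shape)  # (500,3) (120,3) (8,3)
    ```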

  19. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data. PMID:22163811

  20. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
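
    The monitor-analyze-adapt loop that both records describe can be sketched generically in Python with the psutil library; the sampled metrics and the adaptation policy below are illustrative assumptions, not the RTM's actual instrumentation.

    ```python
    # Minimal monitor-analyze-adapt loop in the spirit of a run-time monitor:
    # sample system metrics, analyze them, pick a configuration. psutil is a
    # generic library stand-in; the thread-count policy is hypothetical.
    import time
    import psutil

    def choose_threads(cpu_load, n_cores):
        # Hypothetical policy: back off worker threads when the machine is busy.
        return max(1, n_cores // 2) if cpu_load > 80.0 else n_cores

    n_cores = psutil.cpu_count(logical=False) or 1
    for _ in range(3):                           # three monitoring cycles
        cpu = psutil.cpu_percent(interval=1.0)   # % CPU over a 1 s window
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.0f}% mem={mem:.0f}% -> threads={choose_threads(cpu, n_cores)}")
        time.sleep(0.5)
    ```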

  1. The prediction of speed and incline in outdoor running in humans using accelerometry.

    PubMed

    Herren, R; Sparti, A; Aminian, K; Schutz, Y

    1999-07-01

    To explore whether triaxial accelerometric measurements can be utilized to accurately assess speed and incline of running in free-living conditions, body accelerations during running were recorded at the lower back and at the heel by a portable data logger in 20 human subjects, 10 men and 10 women. After parameterizing body accelerations, two neural networks were designed to recognize each running pattern and calculate speed and incline. Each subject ran 18 times on outdoor roads at various speeds and inclines; 12 runs were used to calibrate the neural networks whereas the 6 other runs were used to validate the model. A small difference between the estimated and the actual values was observed: the square root of the mean square error (RMSE) was 0.12 m·s^-1 for speed and 0.014 radian (rad) (or 1.4% in absolute value) for incline. Multiple regression analysis allowed accurate prediction of speed (RMSE = 0.14 m·s^-1) but not of incline (RMSE = 0.026 rad or 2.6% slope). Triaxial accelerometric measurements thus allow an accurate estimation of running speed and of the incline of terrain (the latter with more uncertainty). This will permit the validation of the energetic results generated on the treadmill as applied to more physiological unconstrained running conditions.
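
    As a rough analogue of the calibration/validation design (12 runs to train, 6 to validate), a small feed-forward regressor can be sketched with scikit-learn; the features and target below are synthetic, and the original work used custom neural networks rather than this library.

    ```python
    # Sketch: a small neural-network regressor mapping accelerometer-derived
    # features to running speed, mimicking the 12-run calibration / 6-run
    # validation split. All data here are synthetic placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(18, 6))                 # 18 runs x 6 acceleration features
    speed = 3.0 + (X @ rng.normal(size=6)) * 0.2 + rng.normal(scale=0.05, size=18)

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X[:12], speed[:12])                  # calibrate on 12 runs
    rmse = mean_squared_error(speed[12:], net.predict(X[12:])) ** 0.5
    print(f"validation RMSE = {rmse:.2f} m/s")
    ```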

  2. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10^6 to 3 × 10^7); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10^6 was established as the lower end of the range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10^6 have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
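
    The parallelization strategy amounts to splitting the particle-history budget across independent processes with separate random streams and summing the partial dose tallies. A toy Python sketch of that structure follows; the "simulate" kernel is a stand-in, not DOSXYZnrc/BEAMnrc.

    ```python
    # Sketch: split a Monte Carlo history budget across worker processes and
    # sum the partial dose tallies. The transport "physics" below is a toy
    # stand-in for illustration only.
    import numpy as np
    from multiprocessing import Pool

    def simulate(args):
        n_histories, seed = args
        rng = np.random.default_rng(seed)        # independent stream per worker
        dose = np.zeros(10)                      # toy 10-voxel dose array
        idx = rng.integers(0, 10, size=n_histories)
        np.add.at(dose, idx, rng.exponential(1.0, size=n_histories))
        return dose

    if __name__ == "__main__":
        total, workers = 3_000_000, 10
        chunks = [(total // workers, seed) for seed in range(workers)]
        with Pool(workers) as pool:
            dose = sum(pool.map(simulate, chunks))
        print(dose / total)                      # per-history dose estimate
    ```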

  3. MULTIPLE SETS OF TWIN SLABS ON THE RUN OUT. THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MULTIPLE SETS OF TWIN SLABS ON THE RUN OUT. THE RUN OUT INCLUDES THE TRAVELING TORCH WHICH CUTS SLABS TO DESIRED LENGTH, AN IDENTIFICATION SYSTEM TO INDICATE HEAT NUMBER AND TRACE IDENTITY OF EVERY SLAB, AND A DEBURRING DEVICE TO SMOOTH SLABS. AT LEFT OF ROLLS IS THE DUMMY BAR. DUMMY BAR IS INSERTED UP THROUGH CONTAINMENT SECTION INTO MOLD PRIOR TO START OF CAST. WHEN STEEL IS INTRODUCED INTO MOLD IT CONNECTS WITH BAR AS CAST BEGINS, AT RUN OUT DUMMY BAR DISCONNECTS AND IS STORED. - U.S. Steel, Fairfield Works, Continuous Caster, Fairfield, Jefferson County, AL

  4. MULTIPLE SETS OF TWIN SLABS ON THE RUN OUT. THE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MULTIPLE SETS OF TWIN SLABS ON THE RUN OUT. THE RUN OUT INCLUDES THE TRAVELING TORCH WHICH CUTS SLABS TO DESIRED LENGTH, AN IDENTIFICATION SYSTEM TO INDICATE HEAT NUMBER AND TRACE IDENTITY OF EVERY SLAB, AND A DEBURRING DEVICE TO SMOOTH SLABS. AT LEFT OF ROLLS IS THE DUMMY BAR. DUMMY BAR IS INSERTED UP THROUGH CONTAINMENT SECTION INTO MOLD PRIOR TO START OF CAST. WHEN STEEL IS INTRODUCED INTO MOLD IT CONNECTS WITH BAR AS CAST BEGINS, AT RUN OUT DUMMY BAR DISCONNECTS AND IS STORED - U.S. Steel, Fairfield Works, Continuous Caster, Fairfield, Jefferson County, AL

  5. Effect of sucrose availability and pre-running on the intrinsic value of wheel running as an operant and a reinforcing consequence.

    PubMed

    Belke, Terry W; Pierce, W David

    2014-03-01

    The current study investigated the effect of motivational manipulations on operant wheel running for sucrose reinforcement and on wheel running as a behavioral consequence for lever pressing, within the same experimental context. Specifically, rats responded on a two-component multiple schedule of reinforcement in which lever pressing produced the opportunity to run in a wheel in one component of the schedule (reinforcer component) and wheel running produced the opportunity to consume sucrose solution in the other component (operant component). Motivational manipulations involved removal of sucrose contingent on wheel running and providing 1h of pre-session wheel running. Results showed that, in opposition to a response strengthening view, sucrose did not maintain operant wheel running. The motivational operations of withdrawing sucrose or providing pre-session wheel running, however, resulted in different wheel-running rates in the operant and reinforcer components of the multiple schedule; this rate discrepancy revealed the extrinsic reinforcing effects of sucrose on operant wheel running, but also indicated the intrinsic reinforcement value of wheel running across components. Differences in wheel-running rates between components were discussed in terms of arousal, undermining of intrinsic motivation, and behavioral contrast. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Xu, Zhou; Rodrigues, Brian

    As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles serving all the customers, where each vehicle must start from or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in the literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.
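
    For reference, the approximation guarantee under discussion has the standard form below (a textbook definition, not notation taken from the paper):

    ```latex
    % For every instance I of the minimization problem, a rho-approximation
    % algorithm returns a solution within factor rho of optimal:
    \[
      \mathrm{ALG}(I) \;\le\; \rho \cdot \mathrm{OPT}(I),
      \qquad \rho = \tfrac{3}{2} \ \text{for the MDMTSP when the number of depots is constant.}
    \]
    ```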

  7. ChronQC: a quality control monitoring system for clinical next generation sequencing.

    PubMed

    Tawari, Nilesh R; Seow, Justine Jia Wen; Perumal, Dharuman; Ow, Jack L; Ang, Shimin; Devasia, Arun George; Ng, Pauline C

    2018-05-15

    ChronQC is a quality control (QC) tracking system for clinical implementation of next-generation sequencing (NGS). ChronQC generates time series plots for various QC metrics to allow comparison of current runs to historical runs. ChronQC has multiple features for tracking QC data, including Westgard rules for clinical validity, laboratory-defined thresholds and historical observations within a specified time period. Users can record their notes and corrective actions directly onto the plots for long-term recordkeeping. ChronQC facilitates regular monitoring of clinical NGS to enable adherence to high-quality clinical standards. ChronQC is freely available on GitHub (https://github.com/nilesh-tawari/ChronQC), Docker (https://hub.docker.com/r/nileshtawari/chronqc/) and the Python Package Index. ChronQC is implemented in Python and runs on all common operating systems (Windows, Linux and Mac OS X). Contact: tawari.nilesh@gmail.com or pauline.c.ng@gmail.com. Supplementary data are available at Bioinformatics online.

  8. Analysis of phospholipids in bio-oils and fats by hydrophilic interaction liquid chromatography-tandem mass spectrometry.

    PubMed

    Viidanoja, Jyrki

    2015-09-15

    A new, sensitive and selective liquid chromatography-electrospray ionization-tandem mass spectrometric (LC-ESI-MS/MS) method was developed for the analysis of phospholipids (PLs) in bio-oils and fats. This analysis employs hydrophilic interaction liquid chromatography-scheduled multiple reaction monitoring (HILIC-sMRM) with a ZIC-cHILIC column. Eight PL class-selective internal standards (homologs) were used for the semi-quantification of 14 PL classes for the first time. More than 400 scheduled MRMs were used for the measurement of PLs with a run time of 34 min. The method's performance was evaluated for vegetable oil, animal fat and algae oil. The averaged within-run precision and between-run precision were ≤10% for all of the PL classes that had a direct homologue as an internal standard. The method accuracy was generally within 80-120% for the tested PL analytes in all three sample matrices. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Simulated fault injection - A methodology to evaluate fault tolerant microprocessor architectures

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.; Carreno, Victor A.

    1990-01-01

    A simulation-based fault-injection method for validating fault-tolerant microprocessor architectures is described. The approach uses mixed-mode simulation (electrical/logic analysis) and injects transient errors at run time to assess the resulting fault impact. As an example, a fault-tolerant architecture which models the digital aspects of a dual-channel real-time jet-engine controller is used. The level of effectiveness of the dual configuration with respect to single and multiple transients is measured. The results indicate 100 percent coverage of single transients. Approximately 12 percent of the multiple transients affect both channels; none result in controller failure since two additional levels of redundancy exist.

  10. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision and other arithmetic functions at execution times that are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K of random-access memory (RAM) and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.

  11. Multiple elastic scattering of electrons in condensed matter

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2017-01-01

    Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The needed information for describing different aspects of the electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of coefficients A_l for any element and any energy from the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with accuracy comparable to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite the relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
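
    Given a set of coefficients A_l, the angular distribution is a truncated Legendre series; a short numpy sketch follows, where the coefficient values and the (2l+1)/4π normalization are assumptions for illustration rather than values from the database.

    ```python
    # Sketch: evaluate a multiple-scattering angular distribution from Legendre
    # coefficients A_l. Both the A_l values and the (2l+1)/(4*pi) normalization
    # are illustrative assumptions, not database entries.
    import numpy as np
    from numpy.polynomial.legendre import legval

    A = np.array([1.0, 0.6, 0.3, 0.1])                 # hypothetical A_l, l = 0..3
    theta = np.linspace(0.0, np.pi, 181)
    c = (2 * np.arange(A.size) + 1) / (4 * np.pi) * A  # series coefficients
    f = legval(np.cos(theta), c)                       # sum_l c_l P_l(cos theta)
    print(f[:3])
    ```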

  12. SWATShare- A Platform for Collaborative Hydrology Research and Education with Cyber-enabled Sharing, Running and Visualization of SWAT Models

    NASA Astrophysics Data System (ADS)

    Rajib, M. A.; Merwade, V.; Song, C.; Zhao, L.; Kim, I. L.; Zhe, S.

    2014-12-01

    Setting up any hydrologic model requires a large amount of effort, including compilation of all the data, creation of input files, and calibration and validation. Given the effort involved, models for a watershed may be created multiple times by multiple groups or organizations to accomplish different research, educational or policy goals. To reduce this duplication of effort and enable collaboration among different groups or organizations around an already existing hydrology model, a platform is needed where anyone can search for existing models, perform simple scenario analysis and visualize model results. The creator and users of a model on such a platform can then collaborate to accomplish new research or educational objectives. From this perspective, a prototype cyber-infrastructure (CI), called SWATShare, is developed for sharing, running and visualizing Soil and Water Assessment Tool (SWAT) models in an interactive GIS-enabled web environment. Users can utilize SWATShare to publish or upload their own models, search and download existing SWAT models developed by others, and run simulations including calibration using high-performance resources provided by XSEDE and the Cloud. Besides running and sharing, SWATShare hosts a novel spatio-temporal visualization system for SWAT model outputs. At the temporal scale, the system creates time-series plots for all the hydrology and water-quality variables available along the reach as well as at the watershed level. At the spatial scale, the system can dynamically generate sub-basin-level thematic maps for any variable at any user-defined date or date range, thereby allowing users to run animations or download the data for subsequent analyses. In addition to research, SWATShare can also be used within a classroom setting as an educational tool for modeling and comparing hydrologic processes under different geographic and climatic settings. SWATShare is publicly available at https://www.water-hub.org/swatshare.

  13. Energy system contribution to 400-metre and 800-metre track running.

    PubMed

    Duffield, Rob; Dawson, Brian; Goodman, Carmel

    2005-03-01

    As a wide range of values has been reported for the relative energetics of 400-m and 800-m track running events, this study aimed to quantify the respective aerobic and anaerobic energy contributions to these events during track running. Sixteen trained 400-m (11 males, 5 females) and 11 trained 800-m (9 males, 2 females) athletes participated in this study. The participants performed (on separate days) a laboratory graded exercise test and multiple race time trials. The relative energy system contribution was calculated by multiple methods based upon measures of race VO2, accumulated oxygen deficit (AOD), blood lactate and estimated phosphocreatine degradation (lactate/PCr). The aerobic/anaerobic energy system contribution (AOD method) to the 400-m event was calculated as 41/59% (male) and 45/55% (female). For the 800-m event, an increased aerobic involvement was noted, with a 60/40% (male) and 70/30% (female) respective contribution. Significant (P < 0.05) negative correlations were noted between race performance and anaerobic energy system involvement (lactate/PCr) for the male 800-m and female 400-m events (r = -0.77 and -0.87, respectively). These track running data compare well with previous estimates of the relative energy system contributions to the 400-m and 800-m events. Additionally, the relative importance and speed of interaction of the respective metabolic pathways have implications for training for these events.

  14. Influences of motorcycle rider and driver characteristics and road environment on red light running behavior at signalized intersections.

    PubMed

    Jensupakarn, Auearree; Kanitpong, Kunnawee

    2018-04-01

    In Thailand, red light running is considered one of the most dangerous behaviors at intersections. Red light running (RLR) behavior is the failure to obey the traffic control signal. However, motorcycle riders and car drivers who run red lights may be influenced by human factors or by the road environment at the intersection. RLR can be advertent or inadvertent behavior influenced by many factors, and little research has been done to evaluate the factors contributing to red-light violation behavior. This study aims to determine the factors influencing red light running behavior, including human characteristics, physical condition of the intersection, traffic signal operation, and traffic condition. A total of 92 intersections were observed in Chiang Mai, Nakhon Ratchasima, and Chonburi, major provinces in each region of Thailand. In addition, the socio-economic characteristics of red light runners were obtained from a self-reported questionnaire survey. Binary logistic regression and multiple linear regression models were used to determine the characteristics of red light runners and the factors influencing rates of red light running, respectively. The results from this study can help in understanding the characteristics of red light runners and the factors leading them to run red lights. For motorcycle riders and car drivers, age, gender, occupation, driving license, helmet/seatbelt use, and the probability of being penalized when running the red light significantly affect RLR behavior. In addition, the results indicated that vehicle travelling direction, time of day, existence of a turning lane, number of lanes, lane width, intersection sight distance, type of traffic signal pole, type of traffic signal operation, length of the yellow time interval, approaching speed, distance from the intersection warning sign to the stop line, and pavement roughness significantly affect RLR rates. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Build and Execute Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, Qiang

    At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.

  16. TIGER: Turbomachinery interactive grid generation

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Shih, Ming-Hsin; Janus, J. Mark

    1992-01-01

    A three-dimensional, interactive grid generation code, TIGER, is being developed for analysis of flows around ducted or unducted propellers. TIGER is a customized grid generator that combines new technology with methods from general grid generation codes. The code generates multiple-block, structured grids around multiple blade rows with a hub and shroud for either C-grid or H-grid topologies. The code is intended for use with a Euler/Navier-Stokes solver also being developed, but is general enough for use with other flow solvers. TIGER features a Silicon Graphics interactive graphics environment that displays a pop-up window, graphics window, and text window. The geometry is read as a discrete set of points, with options for several industrial standard formats and NASA standard formats. Various splines are available for defining the surface geometries. Grid generation is done either interactively or through a batch mode operation using history files from a previously generated grid. The batch mode operation can be done either with a graphical display of the interactive session or with no graphics so that the code can be run on another computer system. Run time can be significantly reduced by running on a Cray Y-MP.

  17. Running Injuries in the Participants of Ljubljana Marathon.

    PubMed

    Vitez, Luka; Zupet, Petra; Zadnik, Vesna; Drobnič, Matej

    2017-10-01

    The aim of our study was to determine the self-reported incidence and prevalence of running-related injuries among participants of the 18th Ljubljana Marathon, and to identify risk factors for their occurrence. A customized questionnaire was distributed at registration. Independent-samples t-tests and chi-square tests were used to calculate differences in risk-factor occurrence between the injured and non-injured groups. Factors that appeared significantly more frequently in the injured group were then included in a multiple logistic regression analysis. The reported lifetime running injury (absence >2 weeks) incidence was: 46% none, 47% rarely, 4% occasionally, and 2% often. The most commonly injured body regions were: knee (30%), ankle and Achilles' tendon (24%), foot (15%), and calf (12%). Male gender, a running history of 1-3 years, and a history of previous injuries were risk factors for lifetime running injury. In the season preceding the event, 65% of participants had not experienced any running injuries, 19% reported minor problems (max 2 weeks absenteeism), but 10% and 7% suffered from moderate (absence 3-4 weeks) or major (more than 4 weeks pause) injuries, respectively. BMI was identified as the sole risk factor. This self-reported study revealed a 53% lifetime prevalence of running-related injuries, with predominant involvement of the knee, ankle and Achilles' tendon. One out of three recreational runners experienced at least one minor running injury per season. It seems that male gender, short running experience, previous injury, and BMI do increase the probability of running-related injuries.

  18. Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations

    DTIC Science & Technology

    2007-08-31

    very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power

  19. Innovative Techniques to Predict Atmospheric Effects on Sensor Performance

    DTIC Science & Technology

    2009-10-15

    since acquiring the MRO data, extensive tabulation of all of the data from all visible satellites (generally, non- resolved ) was also accomplished...efficient code has been written to run multiple OSC simulations in less time . Data from many passes of the same satellite is useful for SOI, whether it is...the data analyzed. Questions about the data were resolved using OSC to determine solar phase angle (SPA), range, time of penumbra entrance/exit and

  20. The temporal representation of the delay of dynamic iterated rippled noise with positive and negative gain by single units in the ventral cochlear nucleus.

    PubMed

    Sayles, Mark; Winter, Ian Michael

    2007-09-26

    Spike trains were recorded from single units in the ventral cochlear nucleus of the anaesthetised guinea-pig in response to dynamic iterated rippled noise with positive and negative gain. The short-term running waveform autocorrelation functions of these stimuli show peaks at integer multiples of the time-varying delay when the gain is +1, and troughs at odd-integer multiples and peaks at even-integer multiples of the time-varying delay when the gain is -1. In contrast, the short-term autocorrelation of the Hilbert envelope shows peaks at integer multiples of the time-varying delay for both positive and negative gain stimuli. A running short-term all-order interspike interval analysis demonstrates the ability of single units to represent the modulated pitch contour in their short-term interval statistics. For units with low best frequency (approximately ≤1.1 kHz) the temporal discharge pattern reflected the waveform fine structure regardless of unit classification (Primary-like, Chopper). For higher best frequency units the pattern of response varied according to unit type. Chopper units with best frequency approximately ≥1.1 kHz responded to envelope modulation, showing no difference between their response to stimuli with positive and negative gain. Primary-like units with best frequencies in the range 1-3 kHz were still able to represent the difference in the temporal fine structure between dynamic rippled noise with positive and negative gain. No unit with a best frequency above 3 kHz showed a response to the temporal fine structure. Chopper units in this high frequency group showed significantly greater representation of envelope modulation relative to primary-like units with the same range of best frequencies. These results show that at the level of the cochlear nucleus there exists sufficient information in the time domain to represent the time-varying pitch associated with dynamic iterated rippled noise.
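
    Iterated rippled noise is generated by repeated delay-and-add with gain g, so its waveform autocorrelation has a peak (g = +1) or trough (g = -1) at the delay, exactly the property exploited above. A numpy sketch follows; the circular delay via np.roll and all parameter values are simplifications.

    ```python
    # Sketch: iterated rippled noise (IRN) via repeated delay-and-add with gain
    # g = +1 or -1, plus a waveform autocorrelation showing the peak (g = +1)
    # or trough (g = -1) at the delay. Circular delay is a simplification.
    import numpy as np

    def irn(n_samples, delay, gain, n_iter=8, rng=np.random.default_rng(0)):
        x = rng.normal(size=n_samples)
        for _ in range(n_iter):
            x = x + gain * np.roll(x, delay)   # delay-and-add iteration
        return x

    def acf(x, max_lag):
        x = x - x.mean()
        full = np.correlate(x, x, mode="full")[len(x) - 1:]
        return full[:max_lag] / full[0]        # normalized, lags 0..max_lag-1

    d = 100                                    # delay in samples
    for g in (+1, -1):
        r = acf(irn(50_000, d, g), 2 * d + 1)
        print(f"gain {g:+d}: acf at delay = {r[d]:+.2f}")  # ~+ for +1, ~- for -1
    ```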

  1. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.

  2. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  3. A Simple Tool for the Design and Analysis of Multiple-Reflector Antennas in a Multi-Disciplinary Environment

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea

    2000-01-01

    The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers on simple instructions from the user at the client.

  4. Multiple Off-Ice Performance Variables Predict On-Ice Skating Performance in Male and Female Division III Ice Hockey Players

    PubMed Central

    Janot, Jeffrey M.; Beltz, Nicholas M.; Dalleck, Lance D.

    2015-01-01

    The purpose of this study was to determine if off-ice performance variables could predict on-ice skating performance in Division III collegiate hockey players. Both men (n = 15) and women (n = 11) hockey players (age = 20.5 ± 1.4 years) participated in the study. The skating tests were agility cornering S-turn, 6.10 m acceleration, 44.80 m speed, modified repeat skate, and 15.20 m full speed. Off-ice variables assessed were years of playing experience, height, weight, and percent body fat, and off-ice performance variables included vertical jump (VJ), 40-yd dash (36.58 m), 1-RM squat, pro-agility, Wingate peak power and peak power percentage drop (% drop), and 1.5 mile (2.4 km) run. Results indicated that 40-yd dash (36.58 m), VJ, 1.5 mile (2.4 km) run, and % drop were significant predictors of skating performance for repeat skate (slowest, fastest, and average time) and 44.80 m speed time, respectively. Four predictive equations were derived from multiple regression analyses: 1) slowest repeat skate time = 2.362 + (1.68 x 40-yd dash time) + (0.005 x 1.5 mile run), 2) fastest repeat skate time = 9.762 - (0.089 x VJ) - (0.998 x 40-yd dash time), 3) average repeat skate time = 7.770 + (1.041 x 40-yd dash time) - (0.63 x VJ) + (0.003 x 1.5 mile time), and 4) 47.85 m speed test = 7.707 - (0.050 x VJ) - (0.01 x % drop). It was concluded that selected off-ice tests could be used to predict on-ice performance regarding speed and recovery ability in Division III male and female hockey players. Key points The 40-yd dash (36.58 m) and vertical jump tests are significant predictors of on-ice skating performance specific to speed. In addition to 40-yd dash and vertical jump, the 1.5 mile (2.4 km) run for time and percent power drop from the Wingate anaerobic power test were also significant predictors of skating performance that incorporates the aspect of recovery from skating activity. Due to the specificity of selected off-ice variables as predictors of on-ice performance, coaches can elect to assess player performance off-ice and focus on other uses of valuable ice time for their individual teams. PMID:26336338

  5. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
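
    The fitting step can be sketched in a few lines of Python: average the running integrals over independent trajectories, then weight a saturating double-exponential fit by the across-trajectory standard deviation. The functional form and synthetic data below are illustrative assumptions, not the paper's exact expressions.

    ```python
    # Sketch of the time-decomposition idea: average Green-Kubo running
    # integrals from independent trajectories, fit a saturating double
    # exponential weighted by the across-trajectory standard deviation.
    import numpy as np
    from scipy.optimize import curve_fit

    def eta_fit(t, A, alpha, tau1, tau2):
        return A * (alpha * (1 - np.exp(-t / tau1)) + (1 - alpha) * (1 - np.exp(-t / tau2)))

    t = np.linspace(0.01, 20.0, 400)                  # time (arbitrary units)
    rng = np.random.default_rng(1)
    true = eta_fit(t, 1.0, 0.7, 0.5, 5.0)             # synthetic "running integral"
    runs = true + rng.normal(scale=0.02 * np.sqrt(t), size=(30, t.size))  # 30 trajectories

    mean, std = runs.mean(axis=0), runs.std(axis=0)
    popt, _ = curve_fit(eta_fit, t, mean, p0=[1.0, 0.5, 0.2, 2.0],
                        sigma=std, absolute_sigma=True, maxfev=20000)
    print(f"estimated plateau (shear viscosity) ~ {popt[0]:.3f}")
    ```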

  6. Structator: fast index-based search for RNA sequence-structure patterns

    PubMed Central

    2011-01-01

    Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires to match sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time-efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows base pairing information to be exploited for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640
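
    The "inside out" idea, anchoring the loop and extending the stem outward while checking complementarity, can be illustrated with a naive Python check; the real tool uses affix arrays for index-based search, and this sketch only mirrors the matching order.

    ```python
    # Sketch of "inside out" stem-loop matching: anchor the loop, then extend
    # outward checking Watson-Crick (and G-U wobble) complementarity. This is
    # a naive illustration, not Structator's affix-array search.
    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def matches_stem_loop(seq, loop_start, loop_len, stem_len):
        """True if stem_len base pairs close around seq[loop_start:loop_start+loop_len]."""
        left, right = loop_start - 1, loop_start + loop_len
        for _ in range(stem_len):
            if left < 0 or right >= len(seq) or (seq[left], seq[right]) not in PAIRS:
                return False
            left, right = left - 1, right + 1    # grow the stem outward
        return True

    seq = "GGGCGAAAGCCC"          # 4 bp stem closing a GAAA tetraloop
    print(matches_stem_loop(seq, loop_start=4, loop_len=4, stem_len=4))  # True
    ```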

  7. Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.

    1991-01-01

    The methodology is described for converting a large, long-running applications code that executed on a single processor of a CRAY-2 supercomputer to a version that executed efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application, from the discipline of computational fluid dynamics, that had utilized over 2000 hrs of CPU time on CRAY-2 during the previous year was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that had a sustained computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.

  8. Contamination Analysis Tools

    NASA Technical Reports Server (NTRS)

    Brieda, Lubos

    2015-01-01

    This talk presents three different tools developed recently for contamination analysis: (1) the HTML QCM analyzer, which runs in a web browser and allows for data analysis of QCM log files; (2) the Java RGA extractor, which can load in multiple SRS .ana files and extract pressure vs. time data; and (3) the C++ contamination simulation code, a 3D particle-tracing code for modeling transport of dust particulates and molecules that uses residence time to determine whether molecules stick. Particulates can be sampled from IEST-STD-1246 and be accelerated by aerodynamic forces.

  9. Trunk muscle activation during moderate- and high-intensity running.

    PubMed

    Behm, David G; Cappa, Dario; Power, Geoffrey A

    2009-12-01

    Time constraints are cited as a barrier to regular exercise. If particular exercises can achieve multiple training functions, the number of exercises and the time needed to achieve a training goal may be decreased. It was the objective of this study to compare the extent of trunk muscle electromyographic (EMG) activity during running and callisthenic activities. EMG activity of the external obliques, lower abdominals (LA), upper lumbar erector spinae (ULES), and lumbosacral erector spinae (LSES) was monitored while triathletes and active nonrunners ran on a treadmill for 30 min at 60% and 80% of their maximum heart rate (HR) reserve, as well as during 30 repetitions of a partial curl-up and 3 min of a modified Biering-Sørensen back extension exercise. The mean root mean square (RMS) amplitude of the EMG signal was monitored over 10-s periods with measures normalized to a maximum voluntary contraction rotating curl-up (external obliques), hollowing exercise (LA), or back extension (ULES and LSES). A main effect for group was that triathletes had greater overall activation of the external obliques (p < 0.05), LA (p = 0.01), and LSES (p < 0.05) than did nonrunners. Main effects for exercise type showed that the external obliques had less EMG activity during 60% and 80% runs, respectively, than with the curl-ups (p = 0.001). The back extension exercise provided less ULES (p = 0.009) and LSES (p = 0.0001) EMG activity than the 60% and 80% runs, respectively. In conclusion, triathletes had greater trunk activation than nonrunners did while running, which could have contributed to their better performance. Back-stabilizing muscles can be activated more effectively with running than with a prolonged back extension activity. Running can be considered as an efficient, multifunctional exercise combining cardiovascular and trunk endurance benefits.
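
    The EMG reduction described, mean RMS amplitude over 10-s windows normalized to a maximum voluntary contraction, is straightforward to make concrete; in the Python sketch below, the sampling rate and signals are synthetic placeholders.

    ```python
    # Sketch: mean RMS amplitude of an EMG signal over 10 s windows, normalized
    # to a maximum voluntary contraction (MVC) reference, as a percentage.
    import numpy as np

    def windowed_rms(x, fs, window_s=10.0):
        n = int(fs * window_s)
        trimmed = x[: len(x) // n * n].reshape(-1, n)
        return np.sqrt((trimmed ** 2).mean(axis=1))   # one RMS value per window

    fs = 1000.0                                       # sampling rate, Hz (assumed)
    rng = np.random.default_rng(2)
    emg = rng.normal(scale=0.3, size=int(fs * 60))    # 60 s of running EMG (synthetic)
    mvc = rng.normal(scale=1.0, size=int(fs * 5))     # 5 s MVC reference (synthetic)

    percent_mvc = windowed_rms(emg, fs).mean() / np.sqrt((mvc ** 2).mean()) * 100
    print(f"mean activation ~ {percent_mvc:.0f}% MVC")
    ```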

  10. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.
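
    Structurally, DREAM re-advances a stored particle state several times from a point shortly before the output time and averages the sampled output. The Python sketch below shows only that ensemble structure; the "advance" and "sample" functions are toy stand-ins for the DSMC solver.

    ```python
    # Sketch of the DREAM ensemble structure: restart from a state stored a few
    # mean collision times before the output time, re-advance it repeatedly,
    # and average the sampled output. Toy dynamics, not a DSMC solver.
    import numpy as np

    def advance(state, dt, rng):
        return state + rng.normal(scale=np.sqrt(dt), size=state.shape)  # toy dynamics

    def sample(state):
        return state.mean()                     # toy macroscopic property

    rng = np.random.default_rng(3)
    state_before = rng.normal(size=1000)        # state stored shortly before t_out
    lead_time, n_ensembles = 0.5, 32

    outputs = [sample(advance(state_before.copy(), lead_time, rng))
               for _ in range(n_ensembles)]     # independent short re-runs
    err = np.std(outputs) / np.sqrt(n_ensembles)
    print(f"ensemble-averaged output: {np.mean(outputs):+.3f} +/- {err:.3f}")
    ```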

  11. "Running a Train": Adolescent Boys' Accounts of Sexual Intercourse Involving Multiple Males and One Female

    ERIC Educational Resources Information Center

    Rothman, Emily F.; Decker, Michele R.; Reed, Elizabeth; Raj, Anita; Silverman, Jay G.; Miller, Elizabeth

    2008-01-01

    The authors used qualitative research methods to explore the context and sexual risk behavior associated with sexual intercourse involving multiple males and one female, commonly called "running a train." Participants were 20 adolescent males aged 14 to 22 years who were either perpetrators of dating violence or perceived by teachers to…

  12. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI running on Linux systems, as well as a reference manual, are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Fischer–Tropsch Synthesis: XANES Spectra of Potassium in Promoted Precipitated Iron Catalysts as a Function of Time On-stream

    DOE PAGES

    Jacobs, Gary; Pendyala, Venkat Ramana Rao; Martinelli, Michela; ...

    2017-06-06

    XANES K-edge spectra of potassium promoter in precipitated Fe catalysts were acquired following activation by carburization in CO and as a function of time on-stream during the course of a Fischer–Tropsch synthesis run for a 100Fe:2K catalyst by withdrawing catalysts, sealed in wax product, for analysis. CO-activated and end-of-run spectra of the catalyst were also obtained for a 100Fe:5K catalyst. Peaks representing electronic transitions and multiple scattering were observed and resembled reference spectra for potassium carbonate or potassium formate. The shift in the multiple scattering peak to higher energy was consistent with sintering of potassium promoter during the course of the reaction test. The catalyst, however, retained its carbidic state, as demonstrated by XANES and EXAFS spectra at the iron K-edge, suggesting that sintering of potassium did not adversely affect the carburization rate, which is important for preventing iron carbides from oxidizing. This method serves as a starting point for developing better understanding of the chemical state and changes in structure occurring with alkali promoter.

  14. Running Injuries in the Participants of Ljubljana Marathon

    PubMed Central

    Vitez, Luka; Zupet, Petra; Zadnik, Vesna; Drobnič, Matej

    2017-01-01

    Abstract Introduction The aim of our study was to determine the self-reported incidence and prevalence of running-related injuries among participants of the 18th Ljubljana Marathon, and to identify risk factors for their occurrence. Methods A customized questionnaire was distributed at registration. Independent-samples t-tests and chi-square tests were used to calculate differences in risk-factor occurrence between the injured and non-injured groups. Factors that appeared significantly more frequently in the injured group were then included in a multiple logistic regression analysis. Results The reported lifetime running injury (absence >2 weeks) incidence was: 46% none, 47% rarely, 4% occasionally, and 2% often. The most commonly injured body regions were: knee (30%), ankle and Achilles’ tendon (24%), foot (15%), and calf (12%). Male gender, a running history of 1-3 years, and a history of previous injuries were risk factors for lifetime running injury. In the season preceding the event, 65% of participants had not experienced any running injuries, 19% reported minor problems (max 2 weeks absenteeism), but 10% and 7% suffered from moderate (absence 3-4 weeks) or major (more than 4 weeks pause) injuries, respectively. BMI was identified as the sole risk factor. Conclusions This self-reported study revealed a 53% lifetime prevalence of running-related injuries, with predominant involvement of the knee, ankle and Achilles’ tendon. One out of three recreational runners experienced at least one minor running injury per season. It seems that male gender, short running experience, previous injury, and BMI do increase the probability of running-related injuries. PMID:29062393

  15. Migration trends of Sockeye Salmon at the northern edge of their distribution

    USGS Publications Warehouse

    Carey, Michael P.; Zimmerman, Christian E.; Keith, Kevin D.; Schelske, Merlyn; Lean, Charles; Douglas, David C.

    2017-01-01

    Climate change is affecting arctic and subarctic ecosystems, and anadromous fish such as Pacific salmon Oncorhynchus spp. are particularly susceptible due to the physiological challenge of spawning migrations. Predicting how migratory timing will change under Arctic warming scenarios requires an understanding of how environmental factors drive salmon migrations. Multiple mechanisms exist by which environmental conditions may influence migrating salmon, including altered migration cues from the ocean and natal river. We explored relationships between interannual variability and annual migration timing (2003–2014) of Sockeye Salmon O. nerka in a subarctic watershed with environmental conditions at broad, intermediate, and local spatial scales. Low numbers of Sockeye Salmon have returned to this high-latitude watershed in recent years, and run size has been a dominant influence on the migration duration and the midpoint date of the run. The duration of the migration upriver varied by as much as 25 d across years, and shorter run durations were associated with smaller run sizes. The duration of the migration was also extended with warmer sea surface temperatures in the staging area and lower values of the North Pacific Index. The midpoint date of the total run was earlier when the run size was larger, whereas the midpoint date was delayed during years in which river temperatures warmed earlier in the season. Documenting factors related to the migration of Sockeye Salmon near the northern limit of their range provides insights into the determinants of salmon migrations and suggests processes that could be important for determining future changes in arctic and subarctic ecosystems.

  16. Hiding the Disk and Network Latency of Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks and disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and from performing multiple page reads in parallel. The paper includes measurements showing that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high-performance disk array.

  17. Parallel matrix multiplication on the Connection Machine

    NASA Technical Reports Server (NTRS)

    Tichy, Walter F.

    1988-01-01

    Matrix multiplication is a computation- and communication-intensive problem. Six parallel algorithms for matrix multiplication on the Connection Machine are presented and compared with respect to their performance and processor usage. For n by n matrices, the algorithms have theoretical running times of O(n^2 log n), O(n log n), O(n), and O(log n), and require n, n^2, n^2, and n^3 processors, respectively. With careful attention to communication patterns, the theoretically predicted runtimes can indeed be achieved in practice. The parallel algorithms illustrate the tradeoffs between performance, communication cost, and processor usage.
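
    To make the processor/time tradeoff concrete, the following minimal Python sketch (invented for this summary, not the paper's Connection Machine code; parallel_matmul and row_block are made-up names) partitions C = A x B row-wise across P worker processes: each worker does O(n^3 / P) of the work, at the cost of replicating B to every worker.

      from multiprocessing import Pool
      import random

      def row_block(args):
          a_rows, b = args
          n, k = len(b[0]), len(b)
          # Each worker multiplies its block of A's rows by the full matrix B.
          return [[sum(a_row[i] * b[i][j] for i in range(k)) for j in range(n)]
                  for a_row in a_rows]

      def parallel_matmul(a, b, workers=4):
          chunk = (len(a) + workers - 1) // workers
          blocks = [(a[i:i + chunk], b) for i in range(0, len(a), chunk)]
          with Pool(workers) as pool:
              return [row for block in pool.map(row_block, blocks) for row in block]

      if __name__ == "__main__":
          n = 64
          a = [[random.random() for _ in range(n)] for _ in range(n)]
          b = [[random.random() for _ in range(n)] for _ in range(n)]
          print(len(parallel_matmul(a, b)))  # 64 rows of C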

  18. Object-oriented millisecond timers for the PC.

    PubMed

    Hamm, J P

    2001-11-01

    Object-oriented programming provides a useful structure for designing reusable code. Accurate millisecond timing is essential for many areas of research. With this in mind, this paper provides a Turbo Pascal unit containing an object-oriented millisecond timer. This approach allows multiple timers to run independently. The timers may also be set at different levels of temporal precision, such as 10^-3 s (milliseconds) or 10^-5 s. The object can also store the time of a flagged event for later examination without interrupting the ongoing timing operation.
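
    A rough Python analogue of the design (the paper's actual code is a Turbo Pascal unit; the class and method names below are invented): several timer objects run independently, precision is configurable per instance, and a flagged event's time is stored without interrupting the ongoing timing.

      import time

      class MillisecondTimer:
          def __init__(self, precision=3):  # 3 -> 10^-3 s, 5 -> 10^-5 s
              self.precision = precision
              self.start_time = time.perf_counter()
              self.flagged = None

          def elapsed(self):
              return round(time.perf_counter() - self.start_time, self.precision)

          def flag(self):
              # Store the time of an event for later examination; timing continues.
              self.flagged = self.elapsed()

      stimulus = MillisecondTimer()
      response = MillisecondTimer(precision=5)  # independent, higher resolution
      stimulus.flag()
      print(stimulus.flagged, response.elapsed())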

  19. Search for Gravitational-wave Inspiral Signals Associated with Short Gamma-ray Bursts During LIGO's Fifth and Virgo's First Science Run

    NASA Astrophysics Data System (ADS)

    Abadie, J.; Abbott, B. P.; Abbott, R.; Accadia, T.; Acernese, F.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauer, Th. S.; Behnke, B.; Beker, M. G.; Belletoile, A.; Benacquista, M.; Betzwieser, J.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birindelli, S.; Biswas, R.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bullington, A.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cain, J.; Calloni, E.; Camp, J. B.; Campagna, E.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Cardenas, L.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande-Mottin, E.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R. C.; Cornish, N.; Corsi, A.; Coulon, J.-P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Drago, M.; Drever, R. W. P.; Driggers, J.; Dueck, J.; Duke, I.; Dumas, J.-C.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Faltas, Y.; Fan, Y.; Fazi, D.; Fehrmann, H.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flasch, K.; Foley, S.; Forrest, C.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Galimberti, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Goetz, E.; Goggin, L. M.; González, G.; Goßler, S.; Gouaty, R.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G. D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Hayler, T.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. 
S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Ingram, D. R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khan, R.; Khazanov, E.; Kim, H.; King, P. J.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Lam, P. K.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Lei, M.; Leindecker, N.; Leonor, I.; Leroy, N.; Letendre, N.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Littenberg, T. B.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; MacInnis, M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Maksimovic, I.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Markowitz, J.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McKechan, D. J. A.; Mehmet, M.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishida, E.; Nishizawa, A.; Nocera, F.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pathak, D.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Predoi, V.; Principe, M.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raics, Z.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Rehbein, H.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sarin, P.; Sassolas, B.; Sathyaprakash, B. 
S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Stein, A. J.; Stein, L. C.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szokoly, G. P.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thorne, K. A.; Thorne, K. S.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Turner, L.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; van Veggel, A. A.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Wilmut, I.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yu, P. P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2010-06-01

    Progenitor scenarios for short gamma-ray bursts (short GRBs) include coalescences of two neutron stars or of a neutron star and a black hole, which would necessarily be accompanied by the emission of strong gravitational waves. We present a search for these known gravitational-wave signatures in temporal and directional coincidence with 22 GRBs that had sufficient gravitational-wave data available in multiple instruments during LIGO's fifth science run, S5, and Virgo's first science run, VSR1. We find no statistically significant gravitational-wave candidates within a [-5, +1) s window around the trigger time of any GRB. Using the Wilcoxon-Mann-Whitney U-test, we find no evidence for an excess of weak gravitational-wave signals in our sample of GRBs. We exclude neutron star-black hole progenitors to a median 90% confidence exclusion distance of 6.7 Mpc.

  20. Software Framework for Advanced Power Plant Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Widmann; Sorin Munteanu; Aseem Jain

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  1. Quality of service routing in wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Sane, Sachin J.; Patcha, Animesh; Mishra, Amitabh

    2003-08-01

    An efficient routing protocol is essential to guarantee application-level quality of service in wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints, such as path-life and the availability of sufficient energy and buffer space in each of the nodes on the path. The algorithm chooses the best path from among the multiple paths that it computes between the two endpoints. We consider the use of control packets that run at a higher priority than the data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection, in preserving the QoS level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for a real-time voice application.
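
    The constraint-filtering step can be illustrated with a small Python sketch (the graph, energy values, path-life table, and thresholds below are all invented, and a real protocol works on distributed state rather than a global view): candidate paths are enumerated, nodes lacking energy headroom are dropped, and the surviving path with the longest predicted path-life is chosen.

      def paths(graph, src, dst, seen=()):
          # Enumerate simple paths from src to dst by depth-first search.
          if src == dst:
              yield [dst]
          for nxt in graph[src]:
              if nxt not in seen:
                  for rest in paths(graph, nxt, dst, seen + (src,)):
                      yield [src] + rest

      graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
      energy = {"A": 9, "B": 2, "C": 7, "D": 8}  # residual energy per node
      path_life = {("A", "B"): 4, ("B", "D"): 3, ("A", "C"): 6, ("C", "D"): 5}

      def ok(p, min_energy=3):
          # Constraint check: every node on the path needs energy headroom.
          return all(energy[n] >= min_energy for n in p)

      def life(p):
          # Predicted path-life is limited by the weakest link.
          return min(path_life[e] for e in zip(p, p[1:]))

      candidates = [p for p in paths(graph, "A", "D") if ok(p)]
      print(max(candidates, key=life))  # ['A', 'C', 'D']; A-B-D fails on energy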

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Gary; Pendyala, Venkat Ramana Rao; Martinelli, Michela

    XANES K-edge spectra of potassium promoter in precipitated Fe catalysts were acquired following activation by carburization in CO and as a function of time on-stream during the course of a Fischer–Tropsch synthesis run for a 100Fe:2K catalyst by withdrawing catalysts, sealed in wax product, for analysis. CO-activated and end-of-run spectra of the catalyst were also obtained for a 100Fe:5K catalyst. Peaks representing electronic transitions and multiple scattering were observed and resembled reference spectra for potassium carbonate or potassium formate. The shift in the multiple scattering peak to higher energy was consistent with sintering of potassium promoter during the course of the reaction test. The catalyst, however, retained its carbidic state, as demonstrated by XANES and EXAFS spectra at the iron K-edge, suggesting that sintering of potassium did not adversely affect the carburization rate, which is important for preventing iron carbides from oxidizing. This method serves as a starting point for developing a better understanding of the chemical state and changes in structure occurring with alkali promoter.

  3. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    PubMed

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem, combining the message passing interface and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
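
    A compact Python sketch of the row-wise idea (scores only, no traceback; the scoring values are illustrative, and this is plain serial code, not the paper's MPI-CUDA implementation): each alignment-matrix row is computed from the previous row alone, which is the access pattern a row-wise GPU implementation exploits.

      def smith_waterman_score(q, d, match=2, mismatch=-1, gap=-1):
          # Local alignment score computed row by row in linear memory.
          prev = [0] * (len(d) + 1)
          best = 0
          for qc in q:
              curr = [0]
              for j, dc in enumerate(d, start=1):
                  diag = prev[j - 1] + (match if qc == dc else mismatch)
                  curr.append(max(0, diag, prev[j] + gap, curr[j - 1] + gap))
              best = max(best, max(curr))
              prev = curr  # only the previous row is kept
          return best

      print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))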

  4. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features of the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  5. Online two-stage association method for robust multiple people tracking

    NASA Astrophysics Data System (ADS)

    Lv, Jingqin; Fang, Jiangxiong; Yang, Jie

    2011-07-01

    Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses are grown longer by particle-filter-based tracking, with detection confidence embedded into the observation model, and an examination scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.

  6. THE ENGINE AND THE REAPER: INDUSTRIALIZATION AND MORTALITY IN LATE NINETEENTH CENTURY JAPAN.

    PubMed

    Tang, John P

    2017-12-01

    Economic development improves long-run health outcomes through access to medical treatment, sanitation, and higher income. Short-run impacts, however, may be ambiguous given disease exposure from market integration. Using a panel dataset of Japanese vital statistics and multiple estimation methods, I find that railroad network expansion is associated with a six percent increase in gross mortality rates among newly integrated regions. Communicable diseases accounted for most of the rail-associated mortality, indicating that railways behaved as transmission vectors. At the same time, market integration facilitated by railways corresponded with an eighteen percent increase in total capital investment nationwide over ten years. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Multitasking for flows about multiple body configurations using the chimera grid scheme

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Morgan, R. L.

    1987-01-01

    The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh, approach, a multiple-body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two-dimensional computations were run on parallel processors on a CRAY X-MP/48, usually with one mesh per processor. Flow field results are compared with single-processor results to demonstrate the feasibility of running multiple-mesh codes on parallel processors and to show the increase in efficiency.

  8. Differential and relaxed image foresting transform for graph-cut segmentation of multiple 3D objects.

    PubMed

    Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K

    2014-01-01

    Graph-cut algorithms have been extensively investigated for interactive binary segmentation, where the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple-object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT underlies efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image's size, while the subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then applies diffusion filtering to smooth boundaries between objects (and background) and, finally, corrects possible disconnections of objects with respect to their seeds. We evaluate DRIFT on 3D CT images of the thorax, segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.

  9. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    NASA Astrophysics Data System (ADS)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Like other asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations. The private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
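
    A toy Python sketch of the distribution idea (trial division is used here purely for illustration; the function names and worker split are invented, and a real attack would use stronger algorithms such as Pollard's rho or the number field sieve): the candidate-divisor range is split across workers, each of which scans its own slice of the search space.

      from multiprocessing import Pool
      from math import isqrt

      def scan_range(args):
          # One volunteer device scans its assigned slice of odd candidates.
          n, lo, hi = args
          for p in range(lo | 1, hi, 2):
              if n % p == 0:
                  return p
          return None

      def factor(n, workers=4):
          limit = isqrt(n) + 1
          chunk = (limit - 3) // workers + 1
          ranges = [(n, 3 + i * chunk, min(3 + (i + 1) * chunk, limit + 1))
                    for i in range(workers)]
          with Pool(workers) as pool:
              for p in pool.map(scan_range, ranges):
                  if p:
                      return p, n // p

      if __name__ == "__main__":
          print(factor(104723 * 104729))  # a deliberately weak modulus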

  10. Multiple Equilibria and Endogenous Cycles in a Non-Linear Harrodian Growth Model

    NASA Astrophysics Data System (ADS)

    Commendatore, Pasquale; Michetti, Elisabetta; Pinto, Antonio

    The standard result of Harrod's growth model is that, because investors react more strongly than savers to a change in income, the long-run equilibrium of the economy is unstable. We re-interpret the Harrodian instability puzzle as a local instability problem and integrate the model with a nonlinear investment function. Multiple equilibria and different types of complex behaviour emerge. Moreover, even in the presence of locally unstable equilibria, the time path of the economy is not divergent for a large set of initial conditions, providing a solution to the instability puzzle.

  11. Defining the determinants of endurance running performance in the heat

    PubMed Central

    James, Carl A.; Hayes, Mark; Willmott, Ashley G. B.; Gibson, Oliver R.; Schlader, Zachary J.; Maxwell, Neil S.

    2017-01-01

    ABSTRACT In cool conditions, physiologic markers accurately predict endurance performance, but it is unclear whether thermal strain and perceived thermal strain modify the strength of these relationships. This study examined the relationships between traditional determinants of endurance performance and time to complete a 5-km time trial in the heat. Seventeen club runners completed graded exercise tests (GXT) in hot (GXTHOT; 32°C, 60% RH, 27.2°C WBGT) and cool conditions (GXTCOOL; 13°C, 50% RH, 9.3°C WBGT) to determine maximal oxygen uptake (V̇O2max), running economy (RE), velocity at V̇O2max (vV̇O2max), and running speeds corresponding to the lactate threshold (LT, 2 mmol.l−1) and lactate turnpoint (LTP, 4 mmol.l−1). Simultaneous multiple linear regression was used to predict 5 km time using these determinants, indicating that neither GXTHOT (R2 = 0.72) nor GXTCOOL (R2 = 0.86) predicted performance in the heat as strongly as has previously been reported in cool conditions. vV̇O2max was the strongest individual predictor of performance, both when assessed in GXTHOT (r = −0.83) and GXTCOOL (r = −0.90). The GXTs revealed the following correlations for individual predictors in GXTHOT: V̇O2max r = −0.7, RE r = 0.36, LT r = −0.77, LTP r = −0.78; and in GXTCOOL: V̇O2max r = −0.67, RE r = 0.62, LT r = −0.79, LTP r = −0.8. These data indicate that (i) GXTHOT does not predict 5 km running performance in the heat as strongly as GXTCOOL, and (ii) as in cool conditions, vV̇O2max may best predict running performance in the heat. PMID:28944273

  12. Operation in the turbulent jet field of a linear array of multiple rectangular jets using a two-dimensional jet (Variation of mean velocity field)

    NASA Astrophysics Data System (ADS)

    Fujita, Shigetaka; Harima, Takashi

    2016-03-01

    The mean flowfield of a linear array of multiple rectangular jets run through transversely by a two-dimensional jet has been investigated experimentally. The object of this experiment was to control both the velocity scale and the length scale of the multiple rectangular jets using a two-dimensional jet. This nozzle exit shape was adopted because the authors had previously reported that a cruciform nozzle strongly promotes inward secondary flows on both jet axes. The aspect ratio of the rectangular nozzle used in this experiment was 12.5. The Reynolds number based on the nozzle width d and the exit mean velocity Ue (≅ 39 m/s) was kept constant at 25000. Longitudinal mean velocity was measured using an X-array hot-wire probe (dh = 3.1 μm wire diameter, lh = 0.6 mm effective length: lh/dh = 194) operated by linearized constant-temperature anemometers (DANTEC), and the spanwise and lateral mean velocities were measured using a yaw meter. The signals from the anemometers were passed through low-pass filters and sampled using an A/D converter; the signals were processed on a personal computer, with an acquisition time of usually 60 seconds. This experiment revealed that the magnitude of the inward secondary flows on both the y and z axes in the upstream region of the present jet was promoted by the two-dimensional jet running transversely, perpendicular to the multiple rectangular jets; consequently, the potential core length on the x axis of the present jet extended 2.3 times longer than that of the multiple rectangular jets alone, and the half-velocity width on the rectangular jet axis of the present jet was 41% shorter than that of the multiple rectangular jets.

  13. Design of a Distributed Microprocessor Sensor System

    DTIC Science & Technology

    1990-04-01

    implemented through these methods. Multiversion software and recovery blocks are intended to tolerate faults through the use of multiple identical software tasks running on separate processors. Multiversion software for real-time systems is discussed by Shepherd, Hitt, and Avizienis. A fault-tolerant microprocessor uses three processing elements: two are used for critical tasks and the third for noncritical tasks. If a discrepancy ... there are no data available to determine the cost effectiveness of multiversion software.

  14. An algorithm for computing the gene tree probability under the multispecies coalescent and its application in the inference of population tree

    PubMed Central

    2016-01-01

    Motivation: A gene tree represents the evolutionary history of gene lineages that originate from multiple related populations. Under the multispecies coalescent model, lineages may coalesce outside the species (population) boundary. Given a species tree (with branch lengths), the gene tree probability is the probability of observing a specific gene tree topology under the multispecies coalescent model. There are two existing algorithms for computing the exact gene tree probability. The first algorithm is due to Degnan and Salter, who enumerate all the so-called coalescent histories for the given species tree and gene tree topology. Their algorithm runs in exponential time in the number of gene lineages in general. The second algorithm is the STELLS algorithm (2012), which is usually faster but also runs in exponential time in almost all cases. Results: In this article, we present a new algorithm, called CompactCH, for computing the exact gene tree probability. This new algorithm is based on the notion of compact coalescent histories: multiple coalescent histories are represented by a single compact coalescent history. The key advantage of our new algorithm is that it runs in polynomial time in the number of gene lineages if the number of populations is fixed to a constant. The new algorithm is more efficient than the STELLS algorithm both in theory and in practice when the number of populations is small and there are multiple gene lineages from each population. As an application, we show that CompactCH can be applied to the inference of the population tree (i.e., the population divergence history) from population haplotypes. Simulation results show that the CompactCH algorithm enables efficient and accurate inference of population trees with many more haplotypes than a previous approach. Availability: The CompactCH algorithm is implemented in the STELLS software package, which is available for download at http://www.engr.uconn.edu/ywu/STELLS.html. Contact: ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307621

  15. SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.

    PubMed

    Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile

    2015-01-01

    In recent years we have witnessed growth in sequencing yield and the number of samples sequenced, and, as a result, the growth of publicly maintained sequence databases. This ubiquitous increase in data has put high requirements on protein similarity search algorithms, with two ever-opposing goals: keeping running times acceptable while maintaining a high enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, alignment of a query to the whole database is usually too slow. Therefore, the majority of protein similarity search methods apply heuristics to reduce the number of candidate sequences in the database before doing the exact local alignment. However, there is still a need to align a query sequence to the reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW when using multiple queries on the Swiss-Prot and UniRef90 databases.

  16. Just-in-time connectivity for large spiking networks.

    PubMed

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
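
    The core just-in-time trick can be sketched in a few lines of Python (a schematic under simple assumptions, not NEURON's JitCon code; the names, network size, and parameter distributions are invented): instead of storing a synapse table, each cell's targets, weights, and delays are re-derived from a deterministic per-cell seed whenever that cell spikes.

      import random

      N_CELLS, FANOUT = 100000, 100

      def outgoing_synapses(pre_id):
          # Same seed -> same connectivity: nothing is stored between spikes.
          rng = random.Random(pre_id)
          for _ in range(FANOUT):
              post = rng.randrange(N_CELLS)
              weight = rng.gauss(0.5, 0.1)
              delay = rng.uniform(1.0, 5.0)  # ms
              yield post, weight, delay

      # On a spike, connectivity is regenerated rather than looked up; each
      # (post, weight, delay) triple would be enqueued as a synaptic event.
      syn_a = list(outgoing_synapses(42))
      syn_b = list(outgoing_synapses(42))
      print(syn_a[0], syn_a == syn_b)  # identical on every regeneration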

  17. Just in time connectivity for large spiking networks

    PubMed Central

    Lytton, William W.; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-01-01

    The scale of large neuronal network simulations is memory-limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically-relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed – just-in-time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON’s standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory-limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that only added items to the queue when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run. PMID:18533821

  18. Performance and accuracy of criticality calculations performed using WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    DOE PAGES

    Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...

    2017-05-01

    In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.

  19. Performance and accuracy of criticality calculations performed using WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola

    In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.

  20. Planning perception and action for cognitive mobile manipulators

    NASA Astrophysics Data System (ADS)

    Gaschler, Andre; Nogina, Svetlana; Petrick, Ronald P. A.; Knoll, Alois

    2013-12-01

    We present a general approach to perception and manipulation planning for cognitive mobile manipulators. Rather than hard-coding single-purpose robot applications, a robot should be able to reason about its basic skills in order to solve complex problems autonomously. Humans intuitively solve tasks in real-world scenarios by breaking down abstract problems into smaller sub-tasks and use heuristics based on their previous experience. We apply a similar idea for planning perception and manipulation to cognitive mobile robots. Our approach is based on contingent planning and run-time sensing, integrated in our "knowledge of volumes" planning framework, called KVP. Using the general-purpose PKS planner, we model information-gathering actions at plan time that have multiple possible outcomes at run time. As a result, perception and sensing arise as necessary preconditions for manipulation, rather than being hard-coded as tasks themselves. We demonstrate the effectiveness of our approach on two scenarios covering visual and force sensing on a real mobile manipulator.

  1. Communication library for run-time visualization of distributed, asynchronous data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowlan, J.; Wightman, B.T.

    1994-04-01

    In this paper we present a method for collecting and visualizing data generated by a parallel computational simulation during run time. Data distributed across multiple processes are sent across parallel communication lines to a remote workstation, which sorts and queues the data for visualization. We have implemented our method in a set of tools called PORTAL (for Parallel aRchitecture data-TrAnsfer Library). The tools comprise generic routines for sending data from a parallel program (callable from either C or FORTRAN), a semi-parallel communication scheme currently built upon Unix sockets, and a real-time connection to the scientific visualization program AVS. Our method is most valuable when used to examine large datasets that can be efficiently generated and do not need to be stored on disk. The PORTAL source libraries, detailed documentation, and a working example can be obtained by anonymous ftp from info.mcs.anl.gov, in the file portal.tar.Z in the directory pub/portal.
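
    A self-contained Python sketch of the same pattern (not the PORTAL API; the message framing and all names are invented): simulation processes stream length-prefixed, step-tagged messages over a socket while a collector thread on the visualization side queues the incoming frames.

      import socket, struct, threading, queue, json

      def send_frame(sock, step, payload):
          # Length-prefixed message so the receiver can frame the byte stream.
          msg = json.dumps({"step": step, "data": payload}).encode()
          sock.sendall(struct.pack("!I", len(msg)) + msg)

      def collector(sock, frames):
          while True:
              hdr = sock.recv(4)
              if not hdr:
                  break  # sender closed the connection
              (length,) = struct.unpack("!I", hdr)
              msg = b""
              while len(msg) < length:
                  msg += sock.recv(length - len(msg))
              frame = json.loads(msg)
              frames.put((frame["step"], frame["data"]))

      sim_end, vis_end = socket.socketpair()
      frames = queue.Queue()
      threading.Thread(target=collector, args=(vis_end, frames), daemon=True).start()
      for step in range(3):
          send_frame(sim_end, step, [step * 1.0, step * 2.0])
      sim_end.close()
      print(frames.get())  # (0, [0.0, 0.0])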

  2. Improving Resource Selection and Scheduling Using Predictions. Chapter 1

    NASA Technical Reports Server (NTRS)

    Smith, Warren

    2003-01-01

    The introduction of computational grids has resulted in several new problems in the area of scheduling that can be addressed using predictions. The first problem is selecting where to run an application among the many resources available in a grid. Our approach to this problem is to provide predictions of when an application would start to execute if submitted to specific scheduled computer systems. The second problem is gaining simultaneous access to multiple computer systems so that distributed applications can be executed. We address this problem by investigating how to support advance reservations in local scheduling systems. Our approaches to both of these problems are based on predictions of the execution time of applications on space-shared parallel computers. As a side effect of this work, we also discuss how predictions of application run times can be used to improve scheduling performance.

  3. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development, when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. To overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define the critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product, where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically, and potentially commercially, viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.

  4. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of light remitted from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
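
    The rescaling idea can be sketched under common "white Monte Carlo" assumptions (this serial Python sketch is illustrative only; the paper's implementation is GPU-accelerated MATLAB, and rescaling for changes in scattering is more involved): run one baseline simulation with absorption removed, store each detected photon's total path length, then reweight the stored paths by Beer-Lambert attenuation for any absorption coefficient mu_a.

      import math

      def diffuse_reflectance(path_lengths_cm, mu_a_per_cm, n_launched):
          # Reweight each stored photon path by exp(-mu_a * L) and normalize.
          attenuated = sum(math.exp(-mu_a_per_cm * L) for L in path_lengths_cm)
          return attenuated / n_launched

      # Paths (cm) from a single hypothetical baseline run, reused for many mu_a:
      paths = [0.8, 1.5, 2.3, 0.4, 3.1]
      for mu_a in (0.01, 0.1, 1.0):
          print(mu_a, diffuse_reflectance(paths, mu_a, n_launched=1000))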

  5. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices

    PubMed Central

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.

    2018-01-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere means of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on CPU only versus using heterogeneous computing resources. Our results show that GPUs on these phones are capable of offering a substantial performance gain in matrix multiplication. Therefore, models that involve multiplication of large matrices can run much faster (approximately 3 times faster in our experiments) due to GPU support. PMID:29629431

  6. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices.

    PubMed

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B

    2017-06-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere means of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on CPU only versus using heterogeneous computing resources. Our results show that GPUs on these phones are capable of offering a substantial performance gain in matrix multiplication. Therefore, models that involve multiplication of large matrices can run much faster (approximately 3 times faster in our experiments) due to GPU support.

  7. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- Or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  8. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass-or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.
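
    The outer loop of this approach can be sketched in a few lines of Python (a schematic with toy numbers; emtg_run below is an invented stand-in for one stochastic EMTG optimization, not the real tool): every launch date is optimized in parallel, then each date is reseeded with the best nearby result and the process iterates.

      import random

      def emtg_run(date, seed_cost=None):
          # Stand-in for one stochastic trajectory optimization of a launch
          # date; a seeded run is assumed to do at least as well as its seed.
          result = 10 + abs(date - 2030) * 0.5 + random.random() * 5
          return result if seed_cost is None else min(result, seed_cost)

      dates = range(2024, 2039)
      best = {d: emtg_run(d) for d in dates}      # unseeded first pass
      for _ in range(5):                          # PEATSA-style iterations
          for d in dates:
              seed = min(best[x] for x in dates if abs(x - d) <= 1)
              best[d] = min(best[d], emtg_run(d, seed))
      print(min(best, key=best.get))              # best launch year found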

  9. Characteristics and sensitivity analysis of multiple-time-resolved source patterns of PM2.5 with real time data using Multilinear Engine 2

    NASA Astrophysics Data System (ADS)

    Peng, Xing; Shi, Guo-Liang; Gao, Jian; Liu, Jia-Yuan; HuangFu, Yan-Qi; Ma, Tong; Wang, Hai-Ting; Zhang, Yue-Chong; Wang, Han; Li, Hui; Ivey, Cesunica E.; Feng, Yin-Chang

    2016-08-01

    With highly time-resolved data on particulate matter (PM) and chemical species, understanding source patterns and chemical characteristics is critical for establishing PM controls. In this work, PM2.5 and chemical species were measured by corresponding online instruments with 1-h time resolution in Beijing. The Multilinear Engine 2 (ME2) model was applied to explore the sources, and four sources (vehicle emission, crustal dust, secondary formation, and coal combustion) were identified. To investigate the sensitivity of the source contributions and chemical characteristics to time resolution, ME2 was run at four time resolutions (1-h, 2-h, 4-h, and 8-h). Crustal dust and coal combustion display large variation across the four time-resolution runs, with their contributions ranging from 6.7 to 10.4 μg m-3 and from 6.4 to 12.2 μg m-3, respectively. The contributions of vehicle emission and secondary formation range from 7.5 to 10.5 and from 14.7 to 16.7 μg m-3, respectively. The sensitivity analyses were conducted using principal component analysis plots (PCA-plot), the coefficient of divergence (CD), the average absolute error (AAE), and correlation coefficients. Across the four time-resolution runs, the source contributions and profiles of crustal dust and coal combustion were less stable than those of the other source categories, possibly due to the lack of key markers of crustal dust and coal combustion (e.g., Si, Al). On the other hand, the time series of source contributions for vehicle emission and crustal dust were more sensitive to time resolution. Findings in this study can improve our knowledge of source contributions and chemical characteristics at different time resolutions.

  10. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  11. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  12. Minimalist shoe injuries: three case reports.

    PubMed

    Cauthon, David J; Langer, Paul; Coniglione, Thomas C

    2013-01-01

    Running in minimalist shoes continues to increase in popularity, and multiple mainstream shoe companies now offer minimalist shoes. While there is no evidence that traditional running shoes prevent injuries, there are concerns that the designs of minimalist shoes may increase injury risk. However, reports of injuries in runners wearing minimalist shoes are rare. We present three injuries that occurred in runners who were wearing minimalist shoes at the time of injury. All three of the runners switched immediately to the minimalist shoes with no transition period. We recommend that any transition to minimalist shoe gear be performed gradually. It is our contention that these injuries are quite common and will continue to become more prevalent as more runners change to these shoes.

  13. Creation of a retrospective job-exposure matrix using surrogate measures of exposure for a cohort of US career firefighters from San Francisco, Chicago and Philadelphia

    PubMed Central

    Dahm, Matthew M; Bertke, Stephen; Allee, Steve; Daniels, Robert D

    2015-01-01

    Objectives: To construct a cohort-specific job-exposure matrix (JEM) using surrogate metrics of exposure for a cancer study on career firefighters from the Chicago, Philadelphia and San Francisco Fire Departments. Methods: Departmental work history records, along with data on historical annual fire-runs and hours, were collected from 1950 to 2009 and coded into separate databases. These data were used to create a JEM based on standardised job titles and fire apparatus assignments using several surrogate exposure metrics to estimate firefighters’ exposure to the combustion byproducts of fire. The metrics included duration of exposure (cumulative time with a standardised exposed job title and assignment), fire-runs (cumulative events of potential fire exposure) and time at fire (cumulative hours of potential fire exposure). Results: The JEM consisted of 2298 unique job titles alongside 16 174 fire apparatus assignments from the three departments, which were collapsed into 15 standardised job titles and 15 standardised job assignments. Correlations were found between fire-runs and time at fires (Pearson coefficient=0.92), duration of exposure and time at fires (Pearson coefficient=0.85), and duration of exposure and fire-runs (Pearson coefficient=0.82). Total misclassification rates were found to be between 16% and 30% when using duration of employment as an exposure surrogate, which has traditionally been used in most epidemiological studies, compared with using the duration of exposure surrogate metric. Conclusions: The constructed JEM successfully differentiated firefighters based on gradient levels of potential exposure to the combustion byproducts of fire using multiple surrogate exposure metrics. PMID:26163543
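
    As an illustration of how the three surrogate metrics relate, the following minimal Python sketch builds cumulative duration, fire-run, and time-at-fire totals from invented work-history rows and correlates them; the numbers are placeholders, not the cohort's data.

```python
# Minimal sketch of the surrogate exposure metrics and their correlations.
import numpy as np

# Hypothetical work-history rows: (years in job, fire-runs/yr, fire-hours/yr)
firefighters = [
    (25, 300, 120),
    (10, 450, 200),
    (30,  80,  30),
    ( 5, 500, 260),
]
duration     = np.array([y for y, _, _ in firefighters], float)
fire_runs    = np.array([y * r for y, r, _ in firefighters], float)  # cumulative
time_at_fire = np.array([y * h for y, _, h in firefighters], float)  # cumulative

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"runs vs time-at-fire:     r = {pearson(fire_runs, time_at_fire):.2f}")
print(f"duration vs time-at-fire: r = {pearson(duration, time_at_fire):.2f}")
print(f"duration vs runs:         r = {pearson(duration, fire_runs):.2f}")
```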

  14. Creation of a retrospective job-exposure matrix using surrogate measures of exposure for a cohort of US career firefighters from San Francisco, Chicago and Philadelphia.

    PubMed

    Dahm, Matthew M; Bertke, Stephen; Allee, Steve; Daniels, Robert D

    2015-09-01

    To construct a cohort-specific job-exposure matrix (JEM) using surrogate metrics of exposure for a cancer study on career firefighters from the Chicago, Philadelphia and San Francisco Fire Departments. Departmental work history records, along with data on historical annual fire-runs and hours, were collected from 1950 to 2009 and coded into separate databases. These data were used to create a JEM based on standardised job titles and fire apparatus assignments using several surrogate exposure metrics to estimate firefighters' exposure to the combustion byproducts of fire. The metrics included duration of exposure (cumulative time with a standardised exposed job title and assignment), fire-runs (cumulative events of potential fire exposure) and time at fire (cumulative hours of potential fire exposure). The JEM consisted of 2298 unique job titles alongside 16,174 fire apparatus assignments from the three departments, which were collapsed into 15 standardised job titles and 15 standardised job assignments. Correlations were found between fire-runs and time at fires (Pearson coefficient=0.92), duration of exposure and time at fires (Pearson coefficient=0.85), and duration of exposure and fire-runs (Pearson coefficient=0.82). Total misclassification rates were found to be between 16% and 30% when using duration of employment as an exposure surrogate, which has traditionally been used in most epidemiological studies, compared with using the duration of exposure surrogate metric. The constructed JEM successfully differentiated firefighters based on gradient levels of potential exposure to the combustion byproducts of fire using multiple surrogate exposure metrics.

  15. High-performance hardware implementation of a parallel database search engine for real-time peptide mass fingerprinting

    PubMed Central

    Bogdán, István A.; Rivers, Jenny; Beynon, Robert J.; Coca, Daniel

    2008-01-01

    Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In an earlier paper we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves an almost 170-fold speed gain relative to a conventional software implementation running on a dual-processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk PMID:18453553
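
    The search stage that the FPGA engine accelerates can be sketched in software. The following minimal Python sketch scores each database entry by counting measured masses that fall within a tolerance of a theoretical peptide mass; the fingerprints, masses, and tolerance are invented for illustration.

```python
# Minimal software sketch of a PMF database search: score each protein by
# matched peptide masses, then report the best-scoring entry.
from bisect import bisect_left

def score(measured, theoretical, tol=0.2):
    """Count measured masses with a theoretical peptide within +/- tol Da."""
    theoretical = sorted(theoretical)
    hits = 0
    for m in measured:
        i = bisect_left(theoretical, m - tol)
        if i < len(theoretical) and theoretical[i] <= m + tol:
            hits += 1
    return hits

database = {   # hypothetical theoretical fingerprints
    "protein_A": [501.3, 842.5, 1045.6, 1179.6, 2211.1],
    "protein_B": [433.2, 842.5, 1300.7, 1500.8],
}
measured = [842.51, 1045.58, 2211.08]

best = max(database, key=lambda p: score(measured, database[p]))
print(best, score(measured, database[best]))
```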

  16. Introduction of the ASGARD Code

    NASA Technical Reports Server (NTRS)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).
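
    ASGARD itself is written in IDL; the following Python sketch only illustrates the kind of cross-channel grouping described above, linking detected events that overlap in time and lie close together in the ROI. The thresholds and event tuples are invented.

```python
# Minimal sketch of grouping brightenings across channels by overlap in
# time and proximity in space; not ASGARD's actual detection logic.
from dataclasses import dataclass

@dataclass
class Event:
    channel: int      # AIA channel (Angstroms)
    t_start: float    # seconds since the start of the ROI time series
    t_end: float
    x: float          # ROI pixel coordinates of the brightening centroid
    y: float

def related(a, b, max_dist=5.0):
    overlap = a.t_start <= b.t_end and b.t_start <= a.t_end
    close = (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= max_dist ** 2
    return overlap and close

events = [Event(171, 0, 600, 10, 12), Event(193, 120, 700, 11, 13),
          Event(211, 5000, 5400, 80, 40)]

# Greedy single-pass grouping: attach each event to the first matching group.
groups = []
for ev in events:
    for g in groups:
        if any(related(ev, other) for other in g):
            g.append(ev)
            break
    else:
        groups.append([ev])
print([[e.channel for e in g] for g in groups])   # [[171, 193], [211]]
```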

  17. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
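
    Schematically, the shadowing problem can be written as a constrained least squares problem; the notation below follows the spirit of the LSS literature and is only a sketch, so see the paper for the precise statement and discretization.

```latex
% Schematic least squares shadowing problem for a system du/dt = f(u, s):
% find the trajectory perturbation v and time dilation eta that stay
% bounded (non-divergent) while satisfying the linearized dynamics.
\begin{align}
\min_{v,\,\eta}\quad & \frac{1}{2T}\int_0^T \|v(t)\|^2 + \alpha^2\,\eta(t)^2 \, dt \\
\text{s.t.}\quad & \frac{dv}{dt} = \frac{\partial f}{\partial u}\, v
                 + \frac{\partial f}{\partial s} + \eta\, f .
\end{align}
% Multiple shooting shadowing splits [0, T] into K segments
% [t_0, t_1], ..., [t_{K-1}, t_K], solves the least squares problem
% segment-by-segment, and imposes continuity v_i(t_i^-) = v_{i+1}(t_i^+)
% at the interfaces, which yields a smaller, better-conditioned system
% than solving over the whole trajectory at once.
```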

  18. Time-Dependent Erosion of Ion Optics

    NASA Technical Reports Server (NTRS)

    Wirz, Richard E.; Anderson, John R.; Katz, Ira; Goebel, Dan M.

    2008-01-01

    The accurate prediction of thruster life requires time-dependent erosion estimates for the ion optics assembly. Such information is critical for assessing end-of-life mechanisms such as electron backstreaming. CEX2D was recently modified to handle time-dependent erosion, double ions, and multiple throttle conditions in a single run. The modified code is called "CEX2D-t". Comparisons of CEX2D-t results with LDT and ELT post-test results show good agreement for both screen and accel grid erosion, including important erosion features such as chamfering of the downstream end of the accel grid and a reduced rate of accel grid aperture enlargement with time.
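
    The time-dependent bookkeeping can be sketched independently of the plasma physics: alternate between computing an erosion rate for the current geometry and advancing the surface one step. In the minimal Python sketch below, the rate model is an invented placeholder, not CEX2D-t's physics.

```python
# Minimal sketch of a time-stepped erosion loop: rate depends on the
# current geometry, and the geometry is updated each step.
def erosion_rate(thickness_mm):
    # Hypothetical rate law standing in for the sputtering calculation.
    return 1.0e-4 * (0.5 + thickness_mm)          # mm per hour

def simulate(thickness_mm=0.5, hours=10_000.0, dt=100.0):
    t = 0.0
    while t < hours and thickness_mm > 0.0:
        thickness_mm -= erosion_rate(thickness_mm) * dt
        t += dt
    return t, max(thickness_mm, 0.0)

t_end, remaining = simulate()
print(f"after {t_end:.0f} h: {remaining:.3f} mm of accel grid remaining")
```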

  19. Properties of the internal clock.

    PubMed

    Church, R M

    1984-01-01

    Evidence has been cited for the following properties of the parts of the psychological process used for timing intervals: The pacemaker has a mean rate that can be varied by drugs, diet, and stress. The switch has a latency to operate, and it can be operated in various modes, such as run, stop, and reset. The accumulator times up in absolute, arithmetic units. Working memory can be reset on command or, after lesions have been created in the fimbria fornix, when there is a gap in a signal. The transformation from the accumulator to reference memory is done with a multiplicative constant that is affected by drugs, lesions, and individual differences. The comparator uses a ratio between the value in the accumulator (or working memory) and reference memory. Finally, there must be multiple switch-accumulator modules to handle simultaneous temporal processing, and the psychological timing process may be used on some occasions and not on others.
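
    A minimal Python sketch of this pacemaker-accumulator-memory-comparator architecture is given below; the pacemaker rate, memory constant, and comparator threshold are illustrative values, not fitted parameters.

```python
# Minimal sketch of the scalar-timing architecture described above.
import random

RATE_HZ = 10.0        # mean pacemaker rate (shiftable by drugs, diet, stress)
MEMORY_K = 1.05       # multiplicative memory-transfer constant
THRESHOLD = 0.15      # comparator ratio threshold

def accumulate(duration_s):
    """Pacemaker pulses counted while the switch is closed ('run' mode)."""
    return sum(1 for _ in range(int(duration_s * RATE_HZ))
               if random.random() < 0.95)          # occasional missed pulse

# Training: reference memory stores the accumulator value scaled by K.
reference_memory = MEMORY_K * accumulate(10.0)     # trained on a 10 s interval

# Test: the ratio comparator triggers a response near the trained duration.
for probe in (5.0, 10.0, 20.0):
    acc = accumulate(probe)
    respond = abs(acc - reference_memory) / reference_memory < THRESHOLD
    print(f"{probe:>4} s probe -> accumulator {acc:>3}, respond: {respond}")
```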

  20. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    PubMed Central

    Churkin, Alexander; Barash, Danny

    2008-01-01

    Background: RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects only those mutations, based on stability considerations, which are likely to be conformationally rearranging. The approach is best examined using the dot plot representation for RNA secondary structure. Results: Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion: A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
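
    The combinatorial growth that motivates the method is easy to make concrete. The following minimal Python sketch counts and enumerates m-point mutants by brute force (the traversal MultiRNAmute avoids); for fixed m, the count C(n, m) * 3^m grows as O(n^m).

```python
# Minimal sketch of why exhaustive multiple-point mutation analysis scales
# as O(n^m): choose m positions, then 3 alternative bases per position.
from itertools import combinations, product
from math import comb

def count_mutants(n, m):
    """Number of distinct m-point mutants of a length-n RNA sequence."""
    return comb(n, m) * 3 ** m

def enumerate_mutants(seq, m):
    """Yield all m-point mutants (brute force; infeasible for large n, m)."""
    bases = "ACGU"
    for positions in combinations(range(len(seq)), m):
        alts = [tuple(b for b in bases if b != seq[i]) for i in positions]
        for choice in product(*alts):
            mutant = list(seq)
            for i, b in zip(positions, choice):
                mutant[i] = b
            yield "".join(mutant)

print(count_mutants(100, 3))     # C(100,3) * 3**3 = 4,365,900 structures to fold
print(sum(1 for _ in enumerate_mutants("ACGU", 2)))   # 54 = C(4,2) * 3**2
```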

  1. Eight-Week Battle Rope Training Improves Multiple Physical Fitness Dimensions and Shooting Accuracy in Collegiate Basketball Players.

    PubMed

    Chen, Wei-Han; Wu, Huey-June; Lo, Shin-Liang; Chen, Hui; Yang, Wen-Wen; Huang, Chen-Fu; Liu, Chiang

    2018-05-28

    Chen, WH, Wu, HJ, Lo, SL, Chen, H, Yang, WW, Huang, CF, and Liu, C. Eight-week battle rope training improves multiple physical fitness dimensions and shooting accuracy in collegiate basketball players. J Strength Cond Res XX(X): 000-000, 2018. Basketball players must possess optimally developed physical fitness in multiple dimensions and shooting accuracy. This study investigated whether battle rope (BR) training enhances multiple physical fitness dimensions, including aerobic capacity (AC), upper-body anaerobic power (AnP), upper-body and lower-body power, agility, and core muscle endurance, and shooting accuracy in basketball players and compared its effects with those of regular training (shuttle run [SR]). Thirty male collegiate basketball players were randomly assigned to the BR or SR groups (n = 15 per group). Both groups received 8-week interval training for 3 sessions per week; the protocol consisted of the same number of sets, exercise time, and rest interval time. The BR group exhibited significant improvements in AC (Progressive Aerobic Cardiovascular Endurance Run laps: 17.6%), upper-body AnP (mean power: 7.3%), upper-body power (basketball chest pass speed: 4.8%), lower-body power (jump height: 2.6%), core muscle endurance (flexion: 37.0%, extension: 22.8%, and right side bridge: 23.0%), and shooting accuracy (free throw: 14.0% and dynamic shooting: 36.2%). However, the SR group exhibited improvements in only AC (12.0%) and upper-body power (3.8%) (p < 0.05). The BR group demonstrated larger pre-post improvements in upper-body AnP (fatigue index) and dynamic shooting accuracy than the SR group did (p < 0.05). The BR group showed higher post-training performance in upper-body AnP (mean power and fatigue index) than the SR group did (p < 0.05). Thus, BR training effectively improves multiple physical fitness dimensions and shooting accuracy in collegiate basketball players.

  2. Effectiveness of Start to Run, a 6-week training program for novice runners, on increasing health-enhancing physical activity: a controlled study

    PubMed Central

    2013-01-01

    Background: The use of the organized sports sector as a setting for health promotion is a relatively new strategy. In the past few years, different countries have been investing resources in the organized sports sector for promoting health-enhancing physical activity. In the Netherlands, National Sports Federations were funded to develop and implement “easily accessible” sporting programs, aimed at the least active population groups. Start to Run, a 6-week training program for novice runners, developed by the Dutch Athletics Organization, is one of these programs. In this study, the effects of Start to Run on health-enhancing physical activity were investigated. Methods: Physical activity levels of Start to Run participants were assessed by means of the Short QUestionnaire to ASsess Health-enhancing physical activity (SQUASH) at baseline, immediately after completing the program and six months after baseline. A control group, matched for age and sex, was assessed at baseline and after six months. Compliance with the Dutch physical activity guidelines was the primary outcome measure. Secondary outcome measures were the total time spent in physical activity and the time spent in each physical activity intensity category and domain. Changes in physical activity within groups were tested with paired t-tests and McNemar tests. Changes between groups were examined with multiple linear and logistic regression analyses. Results: In the Start to Run group, the percentage of people who met the Dutch Norm for Health-enhancing Physical Activity, Fit-norm and Combi-norm increased significantly, both in the short and longer term. In the control group, no significant changes in physical activity were observed. When comparing results between groups, significantly more Start to Run participants than control-group participants were meeting the Fit-norm and Combi-norm after six months. The differences in physical activity between groups in favor of the Start to Run group could be explained by an increase in the time spent in vigorous-intensity activities and sports activities. Conclusions: Start to Run positively influences levels of health-enhancing physical activity of participants, both in the short and longer term. Based on these results, the use of the organized sports sector as a setting to promote health-enhancing physical activity seems promising. PMID:23898920
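
    The within-group test named above is straightforward to reproduce. As a minimal sketch, an exact McNemar test on paired yes/no outcomes (meeting a norm at baseline vs. follow-up) can be computed from the discordant pairs alone; the counts below are invented, not the study's data.

```python
# Minimal sketch of an exact McNemar test for paired binary outcomes.
from scipy.stats import binomtest

# Discordant pairs: b = met the norm only at baseline,
#                   c = met the norm only at follow-up.
b, c = 4, 18

# Under H0 (no change), discordant pairs split 50/50 between b and c.
p_value = binomtest(b, b + c, 0.5).pvalue
print(f"exact McNemar p = {p_value:.4f}")
```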

  3. Relativity theory and time perception: single or multiple clocks?

    PubMed

    Buhusi, Catalin V; Meck, Warren H

    2009-07-22

    Current theories of interval timing assume that humans and other animals time as if using a single, absolute stopwatch that can be stopped or reset on command. Here we evaluate the alternative view that psychological time is represented by multiple clocks, and that these clocks create separate temporal contexts by which duration is judged in a relative manner. Two predictions of the multiple-clock hypothesis were tested. First, that the multiple clocks can be manipulated (stopped and/or reset) independently. Second, that an event of a given physical duration would be perceived as having different durations in different temporal contexts, i.e., would be judged differently by each clock. Rats were trained to time three durations (e.g., 10, 30, and 90 s). When timing was interrupted by an unexpected gap in the signal, rats reset the clock used to time the "short" duration, stopped the "medium" duration clock, and continued to run the "long" duration clock. When the duration of the gap was manipulated, the rats reset these clocks in a hierarchical order, first the "short", then the "medium", and finally the "long" clock. Quantitative modeling assuming re-allocation of cognitive resources in proportion to the relative duration of the gap to the multiple, simultaneously timed event durations was used to account for the results. These results indicate that the three event durations were effectively timed by separate clocks operated independently, and that the same gap duration was judged relative to these three temporal contexts. Results suggest that the brain processes the duration of an event in a manner similar to Einstein's special relativity theory: A given time interval is registered differently by independent clocks dependent upon the context.
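
    A crude stand-in for this account can be simulated. In the minimal Python sketch below, whether a clock resets, stops, or keeps running through a gap depends on the gap's duration relative to the duration that clock times; the thresholds are invented, not the paper's fitted resource-allocation model.

```python
# Minimal sketch of multiple clocks reacting to a gap in proportion to the
# gap's duration relative to each clock's timed duration.
def clock_after_gap(elapsed_s, gap_s, timed_duration_s):
    relative_gap = gap_s / timed_duration_s
    if relative_gap > 0.5:        # long relative to this clock: reset
        return 0.0
    if relative_gap > 0.1:        # moderate: stop (hold the accumulated time)
        return elapsed_s
    return elapsed_s + gap_s      # negligible: keep running through the gap

for duration in (10.0, 30.0, 90.0):
    after = clock_after_gap(elapsed_s=8.0, gap_s=6.0, timed_duration_s=duration)
    print(f"{duration:>4.0f} s clock holds {after:.1f} s after a 6 s gap")
```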

  4. Selecting and implementing the PBS scheduler on an SGI Onyx 2/Origin 2000.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bittner, S.

    1999-06-28

    In the Mathematics and Computer Science Division at Argonne, the demand for resources on the Onyx 2 exceeds the resources available for consumption. To distribute these scarce resources effectively, we need a scheduling and resource management package with multiple capabilities. In particular, it must accept standard interactive user logins, allow batch jobs, backfill the system based on available resources, and permit system activities such as accounting to proceed without interruption. The package must include a mechanism to treat the graphic pipes as a schedulable resource. Also required is the ability to create advance reservations, offer dedicated system modes for large resource runs and benchmarking, and track the resources consumed for each job run. Furthermore, our users want to be able to obtain repeatable timing results on job runs. And, of course, package costs must be carefully considered. We explored several options, including NQE and various third-party products, before settling on the PBS scheduler.
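
    One of the capabilities listed, backfilling, is easy to state as a rule: a short job may start ahead of its queue position only if it fits in the currently idle resources and will finish before the blocked front-of-queue job is predicted to start. A minimal Python sketch of that rule, with invented numbers:

```python
# Minimal sketch of a conservative backfill test.
def can_backfill(free_cpus, cpus_needed, est_runtime_h, front_start_h,
                 now_h=0.0):
    """True if the candidate fits now and cannot delay the front job."""
    fits_now = cpus_needed <= free_cpus
    done_in_time = now_h + est_runtime_h <= front_start_h
    return fits_now and done_in_time

# 8 CPUs idle; the blocked front-of-queue job is predicted to start at t = 4 h.
print(can_backfill(8, cpus_needed=4, est_runtime_h=2.0, front_start_h=4.0))  # True
print(can_backfill(8, cpus_needed=6, est_runtime_h=6.0, front_start_h=4.0))  # False
```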

  5. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  6. Improving User Access to the Integrated Multi-Satellite Retrievals for GPM (IMERG) Products

    NASA Astrophysics Data System (ADS)

    Huffman, George; Bolvin, David; Nelkin, Eric; Kidd, Christopher

    2016-04-01

    The U.S. Global Precipitation Measurement mission (GPM) team has developed the Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm to take advantage of the international constellation of precipitation-relevant satellites and the Global Precipitation Climatology Centre surface precipitation gauge analysis. The goal is to provide a long record of homogeneous, high-resolution quasi-global estimates of precipitation. While expert scientific researchers are major users of the IMERG products, it is clear that many other user communities and disciplines also desire access to the data for wide-ranging applications. Lessons learned during the Tropical Rainfall Measuring Mission, the predecessor to GPM, led to some basic design choices that provided the framework for supporting multiple user bases. For example, two near-real-time "runs" are computed, the Early and Late (currently 5 and 15 hours after observation time, respectively), then the Final Run about 3 months later. The datasets contain multiple fields that provide insight into the computation of the complete precipitation data field, as well as diagnostic (currently) estimates of the precipitation's phase. In parallel with this, the archive sites are working to provide the IMERG data in a variety of formats, and with subsetting and simple interactive analysis to make the data more easily available to non-expert users. The various options for accessing the data are summarized under the pmm.nasa.gov data access page. The talk will end by considering the feasibility of major user requests, including polar coverage, a simplified Data Quality Index, and reduced data latency for the Early Run. In brief, the first two are challenging, but under the team's control. The last requires significant action by some of the satellite data providers.
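
    For readers who retrieve the HDF5 granules directly, a minimal h5py sketch is shown below. The filename is a placeholder, and the dataset path ('Grid/precipitationCal') matches some product versions only; check the actual layout of your file (for example with list(f['Grid'].keys())) before relying on it.

```python
# Minimal sketch of reading a precipitation grid from an IMERG HDF5 granule.
import h5py
import numpy as np

with h5py.File("3B-HHR.MS.MRG.3IMERG.example.HDF5", "r") as f:   # placeholder
    precip = f["Grid/precipitationCal"][0]       # (lon, lat) grid, mm/hr
    lat = f["Grid/lat"][:]
    lon = f["Grid/lon"][:]

precip = np.where(precip < 0, np.nan, precip)    # mask the fill value
print(f"grid {precip.shape}, mean rate {np.nanmean(precip):.3f} mm/hr")
```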

  7. Charged particle multiplicities in pp interactions at $\sqrt{s}$ = 0.9, 2.36, and 7 TeV

    NASA Astrophysics Data System (ADS)

    Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; et al. (CMS Collaboration)

    2011-01-01

    Measurements of primary charged hadron multiplicity distributions are presented for non-single-diffractive events in proton-proton collisions at centre-of-mass energies of $\sqrt{s} = 0.9$, 2.36, and 7 TeV, in five pseudorapidity ranges from $|\eta| < 0.5$ to $|\eta| < 2.4$. The data were collected with the minimum-bias trigger of the CMS experiment during the LHC commissioning runs in 2009 and the 7 TeV run in 2010. The multiplicity distribution at $\sqrt{s} = 0.9$ TeV is in agreement with previous measurements. At higher energies the increase of the mean multiplicity with $\sqrt{s}$ is underestimated by most event generators. The average transverse momentum as a function of the multiplicity is also presented. The measurement of higher-order moments of the multiplicity distribution confirms the violation of Koba-Nielsen-Olesen scaling that has been observed at lower energies.
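
    For reference, Koba-Nielsen-Olesen (KNO) scaling, whose violation the measurement confirms, states that the scaled multiplicity distribution collapses onto a single energy-independent function:

```latex
% KNO scaling hypothesis: the multiplicity distribution P_n, scaled by the
% mean multiplicity <n>, is a universal function of n / <n>,
\[
  \langle n \rangle \, P_n \;=\; \Psi\!\left(\frac{n}{\langle n \rangle}\right),
\]
% so the normalized moments
\[
  C_q \;=\; \frac{\langle n^q \rangle}{\langle n \rangle^{q}}
\]
% would be independent of \sqrt{s}. An energy dependence of the measured
% higher-order C_q is what signals the violation reported above.
```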

  8. NGEE Arctic Tram: Photographs over Low- and High-Centered Polygon Vegetation Communities, Barrow, Alaska, 2014-2017

    DOE Data Explorer

    J. Bryan Curtis; Margaret Torn

    2017-10-05

    This dataset provides a digital image of each measurement position stop for every run of the Tram. There are 137 stops per run and there were sometimes multiple runs per day. This first version provides 12,714 images (*.jpg) collected over 28 days in 2015. Images for the other years will be added.

  9. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
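
    The core idea (a compiled, self-contained model whose full parameter set is exposed as inputs) can be imitated with a thin wrapper around any executable. A minimal Python sketch, assuming a hypothetical executable and a command-line/JSON I/O convention that ViSP itself may not use:

        import json
        import subprocess

        def run_model(executable, params, timeout=3600):
            """Run a self-contained model executable, passing every model
            parameter as a command-line argument and reading JSON results."""
            args = [executable] + [f"--{key}={value}" for key, value in params.items()]
            result = subprocess.run(args, capture_output=True, text=True,
                                    timeout=timeout, check=True)
            return json.loads(result.stdout)

        # e.g. one virtual patient under an assumed metabolic model:
        # outcome = run_model("./t2dm_model", {"metformin_mg": 500, "fasiglifam_mg": 25})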

  10. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs and so on. It is therefore desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  11. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  12. Automated acoustic localization and call association for vocalizing humpback whales on the Navy's Pacific Missile Range Facility.

    PubMed

    Helble, Tyler A; Ierley, Glenn R; D'Spain, Gerald L; Martin, Stephen W

    2015-01-01

    Time difference of arrival (TDOA) methods for acoustically localizing multiple marine mammals have been applied to recorded data from the Navy's Pacific Missile Range Facility in order to localize and track humpback whales. Modifications to established methods were necessary in order to simultaneously track multiple animals on the range faster than real-time and in a fully automated way, while minimizing the number of incorrect localizations. The resulting algorithms were run with no human intervention at computational speeds faster than the data recording speed on over forty days of acoustic recordings from the range, spanning multiple years. Spatial localizations based on correlating sequences of units originating from within the range produce estimates having a standard deviation typically 10 m or less (due primarily to TDOA measurement errors), and a bias of 20 m or less (due primarily to sound speed mismatch). An automated method for associating units to individual whales is presented, enabling automated humpback song analyses to be performed.
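
    The TDOA principle behind these methods can be illustrated in a few lines: the arrival-time difference between two synchronized hydrophones is the lag that maximizes their cross-correlation. A minimal numpy sketch (illustrative only, not the authors' algorithm, which also handles call association and sound-speed mismatch):

        import numpy as np

        def tdoa_seconds(sig_a, sig_b, sample_rate):
            """Estimate the time difference of arrival between two synchronized,
            equally sampled recordings from the cross-correlation peak."""
            corr = np.correlate(sig_a, sig_b, mode="full")
            lag = np.argmax(corr) - (len(sig_b) - 1)  # positive: sig_a arrives later
            return lag / sample_rate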

  13. Effects of acute voluntary loaded wheel running on BDNF expression in the rat hippocampus.

    PubMed

    Lee, Minchul; Soya, Hideaki

    2017-12-31

    Voluntary loaded wheel running involves the use of a load during a voluntary running activity. A muscle-strength or power-type activity performed at a relatively high intensity and a short duration may cause fewer apparent metabolic adaptations but may still elicit muscle fiber hypertrophy. This study aimed to determine the effects of acute voluntary wheel running with an additional load on brain-derived neurotrophic factor (BDNF) expression in the rat hippocampus. Ten-week-old male Wistar rats were randomly assigned to a (1) sedentary (Control) group; (2) voluntary exercise with no load (No-load) group; or (3) voluntary exercise with an additional load (Load) group for 1 week (acute period). The expression of BDNF genes was quantified by real-time PCR. Average distance did not differ significantly between the No-load and Load groups; however, average work was significantly greater in the Load group. The relative soleus weights were greater in the No-load group. Furthermore, loaded wheel running up-regulated the BDNF mRNA level compared with that in the Control group. The BDNF mRNA levels showed a positive correlation with workload levels (r = 0.75), suggesting that the availability of multiple workload levels contributes to the BDNF-related benefits of loaded wheel running noted in this study. This novel approach yielded the first set of findings showing that acute voluntary loaded wheel running, which causes muscular adaptation, enhanced BDNF expression, suggesting a possible role of high-intensity short-term exercise in hippocampal BDNF activity. ©2017 The Korean Society for Exercise Nutrition

  14. Comparison of energy expenditure to walk or run a mile in adult normal weight and overweight men and women.

    PubMed

    Loftin, Mark; Waddell, Dwight E; Robinson, James H; Owens, Scott G

    2010-10-01

    We compared the energy expenditure to walk or run a mile in adult normal weight walkers (NWW), overweight walkers (OW), and marathon runners (MR). The sample consisted of 19 NWW, 11 OW, and 20 MR adults. Energy expenditure was measured at preferred walking speed (NWW and OW) and at the running speed of a recently completed marathon (MR). Body composition was assessed via dual-energy x-ray absorptiometry. Analysis of variance was used to compare groups, with Scheffe's procedure used for post hoc analysis. Multiple regression analysis was used to predict energy expenditure. Results indicated that OW exhibited significantly higher (p < 0.05) mass and fat weight than NWW or MR. Similar values were found between NWW and MR. Absolute energy expenditure to walk or run a mile was similar between groups (NWW 93.9 ± 15.0, OW 98.4 ± 29.9, MR 99.3 ± 10.8 kcal); however, significant differences were noted when energy expenditure was expressed relative to mass (MR > NWW > OW). When energy expenditure was expressed per kilogram of fat-free mass, similar values were found across groups. Multiple regression analysis yielded mass and gender as significant predictors of energy expenditure (R = 0.795, SEE = 10.9 kcal). We suggest that walking is an excellent physical activity for energy expenditure in overweight individuals who are capable of walking without predisposing conditions such as osteoarthritis or cardiovascular risk factors. Moreover, from a practical perspective, our regression equation (kcal = mass (kg) × 0.789 − gender (men = 1, women = 2) × 7.634 + 51.109) allows for the prediction of energy expenditure for a given distance (a mile) rather than for a given time (minutes).
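
    The published equation is straightforward to apply; a one-function Python sketch (function name mine, coefficients and coding from the abstract):

        def kcal_per_mile(mass_kg, gender):
            """Predicted energy expenditure (kcal) to cover one mile.
            gender: 1 for men, 2 for women, per the coding in the abstract."""
            return mass_kg * 0.789 - gender * 7.634 + 51.109

        # e.g. an 80 kg man: kcal_per_mile(80, 1) -> about 106.6 kcal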

  15. Influence of ABO blood group on sports performance.

    PubMed

    Lippi, Giuseppe; Gandini, Giorgio; Salvagno, Gian Luca; Skafidas, Spyros; Festa, Luca; Danese, Elisa; Montagnana, Martina; Sanchis-Gomar, Fabian; Tarperi, Cantor; Schena, Federico

    2017-06-01

    Despite being a recessive trait, the O blood group is the most frequent worldwide among the ABO blood types. Since running performance has been recognized as a major driver of evolutionary advantage in humans, we planned a study to investigate whether the ABO blood group may have an influence on endurance running performance in middle-aged recreational athletes. The study population consisted of 52 recreational, middle-aged, Caucasian athletes (mean age: 49±13 years; body mass index: 23.4±2.3 kg/m²), regularly engaged in endurance activity. The athletes participated in a scientific event called "Run for Science" (R4S), entailing the completion of a 21.1 km (half-marathon) run under competition conditions. The ABO blood type status of the participants was provided by the local Service of Transfusion Medicine. In univariate analysis, running performance was significantly associated with age and weekly training, but not with body mass index. In multiple linear regression analysis, age and weekly training remained significantly associated with running performance. The ABO blood group status was also found to be independently associated with running time, with O blood type athletes performing better than those with non-O blood groups. Overall, age, weekly training and O blood group type explained 62.2% of the total variance of running performance (age, 41.6%; training regimen, 10.5%; ABO blood group, 10.1%). The results of our study show that recreational athletes with the O blood group have better endurance performance than those with non-O blood group types. This finding may provide additional support to the putative evolutionary advantages of carrying the O blood group.

  16. Investigating the Use of the Intel Xeon Phi for Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Sherman, Keegan; Gilfoyle, Gerard

    2014-09-01

    The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e. matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code, making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC-optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than the host core. Work supported by the University of Richmond and the US Department of Energy.
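
    The two kernels named above (matrix multiplication and inversion) come together in the standard Kalman measurement update; a plain numpy sketch of the textbook equations, not the authors' MIC-vectorized C++ code:

        import numpy as np

        def kalman_update(x, P, z, H, R):
            """One Kalman measurement update: combine state estimate x
            (covariance P) with measurement z (model H, noise covariance R)."""
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # gain: the multiply/invert kernels
            x_new = x + K @ (z - H @ x)
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new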

  17. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware, but run actual software, enabling large scale without sacrificing fidelity.

  18. Reduction of User Interaction by Autonomy

    NASA Technical Reports Server (NTRS)

    Morfopoulos, Arin; McHenry, Michael; Matthies, Larry

    2006-01-01

    This paper describes experiments that quantify the improvement that autonomous behaviors enable in the amount of user interaction required to navigate a robot in urban environments. Many papers have discussed various ways to measure the absolute level of autonomy of a system; we measured the relative improvement of autonomous behaviors over teleoperation across multiple traverses of the same course. We performed four runs each on an 'easy' course and a 'hard' course, where half the runs were teleoperated and half used more autonomous behaviors. Statistics show 40-70% reductions in the amount of time the user interacts with the control station; however, with the behaviors tested, the user's attention remained on the control station even when not interacting. Reducing the need for attention will require better obstacle detection and avoidance and better absolute position estimation.

  19. GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaee, H.

    1982-05-01

    An exec has been written and placed on the PEP group's public disk to facilitate the use of several PEP related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file along with the information required to run the job to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, thus making it of particular interest to the users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own Virtual Machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.

  20. GO, an exec for running the programs: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT and TURTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaee, H.

    1982-05-01

    An exec has been written and placed on the PEP group's public disk (PUBRL 192) to facilitate the use of several PEP related computer programs available on VM. The exec's program list currently includes: CELL, COLLIDER, MAGIC, PATRICIA, PETROS, TRANSPORT, and TURTLE. In addition, provisions have been made to allow addition of new programs to this list as they become available. The GO exec is directly callable from inside the Wylbur editor (in fact, currently this is the only way to use the GO exec). It provides the option of running any of the above programs in either interactive or batch mode. In the batch mode, the GO exec sends the data in the Wylbur active file along with the information required to run the job to the batch monitor (BMON, a virtual machine that schedules and controls execution of batch jobs). This enables the user to proceed with other VM activities at his/her terminal while the job executes, thus making it of particular interest to the users with jobs requiring much CPU time to execute and/or those wishing to run multiple jobs independently. In the interactive mode, useful for small jobs requiring less CPU time, the job is executed by the user's own Virtual Machine using the data in the active file as input. At the termination of an interactive job, the GO exec facilitates examination of the output by placing it in the Wylbur active file.

  1. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 on large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
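
    The work-ticket pattern described here, a server handing each client the parameters for one run, can be sketched generically; the following Python stand-in uses a local process pool rather than the paper's inter-node protocol, and all names are illustrative:

        from concurrent.futures import ProcessPoolExecutor

        def run_one(ticket):
            """Stand-in for a single GEANT4 run configured by one work ticket,
            e.g. one projection angle of a tomography scan."""
            return f"ran {ticket['n_events']} events at {ticket['angle_deg']} deg"

        if __name__ == "__main__":
            tickets = [{"angle_deg": a, "n_events": 10_000} for a in range(0, 180, 5)]
            with ProcessPoolExecutor() as pool:   # local stand-in for the cluster
                for summary in pool.map(run_one, tickets):
                    print(summary)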

  2. CHROMA: consensus-based colouring of multiple alignments for publication.

    PubMed

    Goodstadt, L; Ponting, C P

    2001-09-01

    CHROMA annotates multiple protein sequence alignments by consensus to produce formatted and coloured text suitable for incorporation into other documents for publication. The package is designed to be flexible and reliable, and has a simple-to-use graphical user interface running under Microsoft Windows. Both the executables and source code for CHROMA running under Windows and Linux (portable command-line only) are freely available at http://www.lg.ndirect.co.uk/chroma. Software enquiries should be directed to CHROMA@lg.ndirect.co.uk.

  3. The Chandra Monitoring System

    NASA Astrophysics Data System (ADS)

    Wolk, S. J.; Petreshock, J. G.; Allen, P.; Bartholowmew, R. T.; Isobe, T.; Cresitello-Dittmar, M.; Dewey, D.

    The NASA Great Observatory Chandra was launched July 23, 1999 aboard the space shuttle Columbia. The Chandra Science Center (CXC) runs a monitoring and trends analysis program to maximize the science return from this mission. At the time of the launch, the monitoring portion of this system was in place. The system is a collection of multiple threads and programming methodologies acting cohesively. Real-time data are passed to the CXC. Our real-time tool, ACORN (A Comprehensive object-ORiented Necessity), performs limit checking of performance-related hardware. Chandra is in ground contact less than 3 hours a day, so the bulk of the monitoring must take place on data dumped by the spacecraft. To do this, we have written several tools which run off of the CXC data system pipelines. MTA_MONITOR_STATIC limit-checks FITS files containing hardware data. MTA_EVENT_MON and MTA_GRAT_MON create quick-look data for the focal plane instruments and the transmission gratings. When instruments violate their operational limits, the responsible scientists are notified by email and problem tracking is initiated. Output from all these codes is distributed to CXC scientists via an HTML interface.

  4. The Relationship Between Soldier Performance on the Two-Mile Run and the 20-m Shuttle Run Test.

    PubMed

    Canino, Maria C; Cohen, Bruce S; Redmond, Jan E; Sharp, Marilyn A; Zambraski, Edward J; Foulis, Stephen A

    2018-05-01

    The 20-m shuttle run test (MSRT) is a common field test used to measure aerobic fitness in controlled environments. The U.S. Army currently assesses aerobic fitness with the two-mile run (TMR), but external factors may impact test performance. The aim of this study was to examine the relationship between Army Physical Fitness Test TMR performance and the MSRT in military personnel. A group of 531 (403 males and 128 females) active duty soldiers (age: 24.0 ± 4.1 years) performed the MSRT in an indoor facility. Heart rate was monitored for the duration of the test. Post-test heart rate and age-predicted maximal heart rate were used to determine near-maximal performance on the MSRT. The soldiers provided their most recent Army Physical Fitness Test TMR time (min). A Pearson correlation and multiple linear regression analyses were performed to examine the relationship between TMR time (min) and MSRT score (total number of shuttles completed). The study was approved by the Human Use Review Committee at the U.S. Army Research Institute of Environmental Medicine, Natick, Massachusetts. A significant, negative correlation exists between TMR time and MSRT score (r = −0.75, p < 0.001). Sex and MSRT score significantly predicted TMR time (adjusted R² = 0.65, standard error of estimate = 0.97, p < 0.001), with 95% ratio limits of agreement of ±12.6%. The resulting equation for predicted TMR time is: TMR = 17.736 − 2.464 × (sex) − 0.050 × (MSRT) − 0.026 × (MSRT × sex), where males equal zero, females equal one, and MSRT score is the total number of shuttles completed. The MSRT is a strong predictor of the TMR and should be considered as a diagnostic tool when assessing aerobic fitness in active duty soldiers.
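
    The regression model quoted above is simple to apply in code; a sketch (function name mine, coefficients and coding from the abstract):

        def predicted_tmr_minutes(msrt_shuttles, sex):
            """Predicted two-mile run time (min) from the 20-m shuttle run score.
            sex: 0 for males, 1 for females, per the coding in the abstract."""
            return (17.736 - 2.464 * sex - 0.050 * msrt_shuttles
                    - 0.026 * msrt_shuttles * sex)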

  5. StatsDB: platform-agnostic storage and understanding of next generation sequencing run metrics

    PubMed Central

    Ramirez-Gonzalez, Ricardo H.; Leggett, Richard M.; Waite, Darren; Thanki, Anil; Drou, Nizar; Caccamo, Mario; Davey, Robert

    2014-01-01

    Modern sequencing platforms generate enormous quantities of data in ever-decreasing amounts of time. Additionally, techniques such as multiplex sequencing allow one run to contain hundreds of different samples. With such data comes a significant challenge to understand its quality and to understand how the quality and yield are changing across instruments and over time. As well as the desire to understand historical data, sequencing centres often have a duty to provide clear summaries of individual run performance to collaborators or customers. We present StatsDB, an open-source software package for storage and analysis of next generation sequencing run metrics. The system has been designed for incorporation into a primary analysis pipeline, either at the programmatic level or via integration into existing user interfaces. Statistics are stored in an SQL database and APIs provide the ability to store and access the data while abstracting the underlying database design. This abstraction allows simpler, wider querying across multiple fields than is possible by the manual steps and calculation required to dissect individual reports, e.g. "provide metrics about nucleotide bias in libraries using adaptor barcode X, across all runs on sequencer A, within the last month". The software is supplied with modules for storage of statistics from FastQC, a commonly used tool for analysis of sequence reads, but the open nature of the database schema means it can be easily adapted to other tools. Currently at The Genome Analysis Centre (TGAC), reports are accessed through our LIMS system or through a standalone GUI tool, but the API and supplied examples make it easy to develop custom reports and to interface with other packages. PMID:24627795
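
    The kind of cross-run query quoted above becomes ordinary SQL once metrics sit in a database. A sketch against a hypothetical two-table schema (table and column names are mine, not StatsDB's actual schema):

        import sqlite3

        con = sqlite3.connect("stats.db")
        rows = con.execute("""
            SELECT r.run_id, m.value
            FROM runs r
            JOIN metrics m ON m.run_id = r.run_id
            WHERE m.name = 'nucleotide_bias'
              AND r.barcode = ?
              AND r.instrument = ?
              AND r.run_date >= date('now', '-1 month')
        """, ("X", "A")).fetchall()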

  6. artdaq: DAQ software development made simple

    NASA Astrophysics Data System (ADS)

    Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron

    2017-10-01

    For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. Since art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support for an alternate mode of running whereby data from some subdetector components are only streamed if requested has been added; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment comes new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.

  7. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

    Fast simulation techniques that generate high-resolution synthetic radiographic images are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore explored, with the aim of quantifying the reduction in computational time relative to the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high-resolution radiography images. Using the ADVANTG deterministic code to simulate radiography images decreased the computational time by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  8. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  9. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations, integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  10. Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-12-01

    The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
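
    In its simplest form, ring fitting reduces to a least-squares circle fit. A compact algebraic (Kåsa-style) fit in numpy, purely illustrative and unrelated to the NA62 GPU implementation:

        import numpy as np

        def fit_ring(x, y):
            """Algebraic least-squares circle fit: centre (a, b) and radius r
            from hit coordinates x, y (1-D arrays of at least three hits)."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
            a, b, d = sol
            return a, b, np.sqrt(d + a**2 + b**2)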

  11. Online Meta-data Collection and Monitoring Framework for the STAR Experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.; Betts, W.; Van Buren, G.

    2012-12-01

    The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data streams. In this paper we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Meta-data Collection, Monitoring, Online QA and several Run-Time and Data Acquisition system components in a very efficient manner. The very nature of the reliable message bus suggests parallel usage of multiple independent storage mechanisms for our meta-data. We describe our experience with a robust data-taking setup employing MySQL- and HyperTable-based archivers for meta-data processing. In addition, MIRA has an AJAX-enabled web GUI, which allows real-time visualisation of online process flow and detector subsystem states, and doubles as a sophisticated alarm system when combined with complex event processing engines like Esper, Borealis or Cayuga. The performance data and our planned path forward are based on our experience during the 2011-2012 running of STAR.
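
    For flavour, publishing one reading onto an AMQP bus takes only a few lines with the pika client; this is a generic illustration, not MIRA's actual message schema:

        import json
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = conn.channel()
        channel.queue_declare(queue="metadata")   # queue name is illustrative
        channel.basic_publish(exchange="", routing_key="metadata",
                              body=json.dumps({"detector": "tpc", "hv": 1234.5}))
        conn.close()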

  12. Accelerating Demand Paging for Local and Remote Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes a new algorithm that improves the performance of application-controlled demand paging for the out-of-core visualization of data sets that are on either local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The new algorithm can be applied to many different visualization algorithms since application-controlled demand paging is not specific to any visualization algorithm. The paper includes measurements that show that the new multi-threaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by up to 60%. Visualization runs using data from remote disk ran about as fast as ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.

  13. On the Rapid Computation of Various Polylogarithmic Constants

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Borwein, Peter; Plouffe, Simon

    1996-01-01

    We give algorithms for the computation of the d-th digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or pi on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of pi, the billionth hexadecimal digits of pi-squared, log(2) and log-squared(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like pi, pi-squared, log(2) and log-squared(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example a critical identity for pi.
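
    The best-known identity of this type is the base-16 formula for π (quoted from the standard literature on these digit-extraction algorithms):

        \begin{equation}
          \pi \;=\; \sum_{k=0}^{\infty} \frac{1}{16^{k}}
          \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right),
        \end{equation}

    which is what makes it possible to compute the d-th hexadecimal digit of π without computing the preceding digits.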

  14. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20× reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  15. Kinematically redundant arm formulations for coordinated multiple arm implementations

    NASA Technical Reports Server (NTRS)

    Bailey, Robert W.; Quiocho, Leslie J.; Cleghorn, Timothy F.

    1990-01-01

    Although control laws for kinematically redundant robotic arms were presented as early as 1969, redundant arms have only recently become recognized as viable solutions to limitations inherent to kinematically sufficient arms. The advantages of run-time control optimization and arm reconfiguration are becoming increasingly attractive as the complexity and criticality of robotic systems continues to progress. A generalized control law for a spatial arm with 7 or more degrees of freedom (DOF) based on Whitney's resolved rate formulation is given. Results from a simulation implementation utilizing this control law are presented. Furthermore, results from a two arm simulation are presented to demonstrate the coordinated control of multiple arms using this formulation.
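
    Whitney's resolved-rate idea, extended to a redundant arm, is usually written with the Jacobian pseudoinverse plus a null-space term; a textbook form (not quoted from this paper) is:

        \begin{equation}
          \dot{q} \;=\; J^{+} \dot{x} \;+\; \left( I - J^{+} J \right) z ,
        \end{equation}

    where q̇ are the joint rates, ẋ the commanded end-effector velocity, J⁺ the pseudoinverse of the Jacobian, and z an arbitrary joint-rate vector projected into the null space; this last term is what enables the run-time optimization and arm reconfiguration mentioned above.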

  16. Constructing Cost-Effective and Targetable ICS Honeypots Suited for Production Networks

    DTIC Science & Technology

    2015-03-26

    introducing Honeyd+ has a marginal impact on performance. Notable findings are that the Raspberry Pi is the preferred hosting platform for the EtherNet/IP... Raspberry Pi or Gumstix, which is a low-cost approach to replicating multiple decoys. One hidden drawback to low- interaction honeypots is the extensive time...EtherNet/IP industrial protocol. Honeyd+ is hosted on a low-cost computing platform ( Raspberry Pi running Raspbian, approximately $50) and a high-cost

  17. Evaluation of a low-end architecture for collaborative software development, remote observing, and data analysis from multiple sites

    NASA Astrophysics Data System (ADS)

    Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold

    2000-06-01

    The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance at different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indexes for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient application sharing at the Intranet level and moderately efficient sharing at the Extranet (i.e., Wide Area Network) level. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using these features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture used and its performance, based on a series of application sharing tests we carried out to ascertain its effectiveness in real collaborative remote work, observations and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.

  18. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited processing and storage power, accessibility, availability, etc. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources, based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The cloud application is developed using free and open source software, open standards and prototype code, and presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. It is a powerful collaboration platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The application is available at all times, accessible from everywhere, scalable, works in a distributed computing environment, provides a real-time multi-user collaboration platform, uses interoperable code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services: 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The presented solution is a prototype and can be used as a foundation for the development of any specialized cloud geospatial application. Further research will focus on distributing the cloud application over additional VMs, and on testing the scalability and availability of the services.

  19. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.

  20. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
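
    A row-block layout is the simplest way to see the structure of parallel matrix-vector multiplication (the paper's hypercube algorithm, with its O(n/√p + log p) communication cost, uses a more sophisticated two-dimensional decomposition). A minimal mpi4py sketch under the simpler layout:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, p = comm.Get_rank(), comm.Get_size()

        n = 1024                              # assume p divides n
        rows = n // p
        A_local = np.random.rand(rows, n)     # this rank's block of rows
        x = np.empty(n)
        if rank == 0:
            x[:] = np.random.rand(n)
        comm.Bcast(x, root=0)                 # every rank needs the full vector

        y_local = A_local @ x                 # purely local multiply
        y = np.empty(n) if rank == 0 else None
        comm.Gather(y_local, y, root=0)       # assemble the result on rank 0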

  1. Validity of PALMS GPS scoring of active and passive travel compared with SenseCam.

    PubMed

    Carlson, Jordan A; Jankowska, Marta M; Meseck, Kristin; Godbole, Suneeta; Natarajan, Loki; Raab, Fredric; Demchak, Barry; Patrick, Kevin; Kerr, Jacqueline

    2015-03-01

    The objective of this study is to assess validity of the personal activity location measurement system (PALMS) for deriving time spent walking/running, bicycling, and in vehicle, using SenseCam (Microsoft, Redmond, WA) as the comparison. Forty adult cyclists wore a Qstarz BT-Q1000XT GPS data logger (Qstarz International Co., Taipei, Taiwan) and SenseCam (camera worn around the neck capturing multiple images every minute) for a mean time of 4 d. PALMS used distance and speed between global positioning system (GPS) points to classify whether each minute was part of a trip (yes/no), and if so, the trip mode (walking/running, bicycling, or in vehicle). SenseCam images were annotated to create the same classifications (i.e., trip yes/no and mode). Contingency tables (2 × 2) and confusion matrices were calculated at the minute level for PALMS versus SenseCam classifications. Mixed-effects linear regression models estimated agreement (mean differences and intraclass correlation coefficients) between PALMS and SenseCam with regard to minutes/day in each mode. Minute-level sensitivity, specificity, and negative predictive value were ≥88%, and positive predictive value was ≥75% for non-mode-specific trip detection. Seventy-two percent to 80% of outdoor walking/running minutes, 73% of bicycling minutes, and 74%-76% of in-vehicle minutes were correctly classified by PALMS. For minutes per day, PALMS had a mean bias (i.e., amount of over or under estimation) of 2.4-3.1 min (11%-15%) for walking/running, 2.3-2.9 min (7%-9%) for bicycling, and 4.3-5 min (15%-17%) for vehicle time. Intraclass correlation coefficients were ≥0.80 for all modes. PALMS has validity for processing GPS data to objectively measure time spent walking/running, bicycling, and in vehicle in population studies. Assessing travel patterns is one of many valuable applications of GPS in physical activity research that can improve our understanding of the determinants and health outcomes of active transportation as well as its effect on physical activity.
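
    PALMS's minute-level mode classification rests on speed and distance rules; a toy version with invented cutoffs (the real PALMS thresholds are configurable and are not reproduced here):

        def classify_minute(speed_kmh):
            """Toy trip-mode classifier for one minute of GPS data;
            all thresholds are illustrative, not PALMS's actual values."""
            if speed_kmh < 1:
                return "stationary"
            if speed_kmh < 10:
                return "walking/running"
            if speed_kmh < 25:
                return "bicycling"
            return "vehicle"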

  2. Running of the scalar spectral index in bouncing cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehners, Jean-Luc; Wilson-Ewing, Edward, E-mail: jean-luc.lehners@aei.mpg.de, E-mail: wilson-ewing@aei.mpg.de

    We calculate the running of the scalar index in the ekpyrotic and matter bounce cosmological scenarios, and find that it is typically negative for ekpyrotic models, while it is typically positive for realizations of the matter bounce where multiple fields are present. This can be compared to inflation, where the observationally preferred models typically predict a negative running. The magnitude of the running is expected to be between 10⁻⁴ and 10⁻², leading in some cases to interesting expectations for near-future observations.
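
    For orientation, the running is the scale dependence of the scalar spectral index; in the usual convention (standard cosmology notation, not taken from this record):

        \begin{equation}
          \alpha_s \;\equiv\; \frac{\mathrm{d}\, n_s}{\mathrm{d} \ln k},
          \qquad
          n_s(k) \;\simeq\; n_s(k_*) + \alpha_s \ln \frac{k}{k_*} ,
        \end{equation}

    so the measured sign of α_s can discriminate between the ekpyrotic and multi-field matter bounce scenarios discussed above.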

  3. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit implemented by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs' execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time in running multiple consecutive jobs. If at some point the available time in a pilot is too short for the execution of any job, it should be released, while it could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even for resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job, whose length will be adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different numbers of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. This paper demonstrates that, using this simple but effective solution, LHCb manages to make more efficient use of the available resources, and that it can easily use new types of resources. One example is represented by resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-job pilots. A second example is represented by opportunistic resources with limited available time.
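
    The just-in-time decision described above is a one-line calculation from the three quoted inputs; a sketch (the function name and safety margin are mine):

        def events_to_produce(cpu_work_per_event, machine_power, seconds_left,
                              safety=0.9):
            """Number of MC events a pilot can simulate in its remaining time.
            cpu_work_per_event is in normalized work units per event and
            machine_power in the same units per second; safety is a margin
            so the job finishes before the pilot is killed."""
            return int(safety * seconds_left * machine_power / cpu_work_per_event)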

  4. Correlates of adherence to a telephone-based multiple health behavior change cancer preventive intervention for teens: the Healthy for Life Program (HELP).

    PubMed

    Mays, Darren; Peshkin, Beth N; Sharff, McKane E; Walker, Leslie R; Abraham, Anisha A; Hawkins, Kirsten B; Tercyak, Kenneth P

    2012-02-01

    This study examined factors associated with teens' adherence to a multiple health behavior cancer preventive intervention. Analyses identified predictors of trial enrollment, run-in completion, and adherence (intervention initiation, number of sessions completed). Of 104 teens screened, 73% (n = 76) were trial eligible. White teens were more likely to enroll than non-Whites (χ²(1) = 4.49, p = .04). Among enrolled teens, 76% (n = 50) completed the run-in; there were no differences between run-in completers and noncompleters. A majority of run-in completers (70%, n = 35) initiated the intervention, though teens who initiated the intervention were significantly younger than those who did not (p < .05). The mean number of sessions completed was 5.7 (SD = 2.6; maximum = 8). After adjusting for age, teens with poorer session engagement (e.g., less cooperative) completed fewer sessions (B = −1.97, p = .003, R² = .24). Implications for adolescent cancer prevention research are discussed.

  5. Static analysis of the hull plate using the finite element method

    NASA Astrophysics Data System (ADS)

    Ion, A.

    2015-11-01

    This paper presents static analysis at two levels of a container ship's structure: the first at the girder/hull plate level, and the second over the entire strength hull of the vessel. This article describes the work for the static analysis of a hull plate. We use the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 CPU processors at 3.33 GHz and 32 GB of installed memory. In terms of software, the shared-memory parallel version of ANSYS refers to running ANSYS across multiple cores on an SMP system. The distributed-memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP or DMP systems.

  6. Mobilization of circulating progenitor cells in multiple myeloma during VCAD therapy with or without rhG-CSF.

    PubMed

    Majolino, I; Marcenò, R; Buscemi, F; Scimè, R; Vasta, S; Indovina, A; Pampinella, M; Catania, P; Santoro, A

    1995-01-01

    Circulating progenitor cells (CPC), when infused in large numbers, rapidly repopulate the marrow after myeloablation with high-dose therapy. In multiple myeloma (MM), as in other disorders, different chemotherapy regimens, including single- as well as multiple-agent chemotherapy, with or without hemopoietic growth factors, have been proposed to mobilize these progenitor cells into the blood. Here we report our experience with a drug combination called VCAD and compare the results to those obtained by adding rhG-CSF to the same combination. Fourteen MM patients were given one course of VCAD, a chemotherapy combination of vincristine 2 mg, cyclophosphamide 4 x 0.5 g/m2, adriamycin 2 x 50 mg/m2 and dexamethasone 4 x 40 mg, before undergoing apheresis to collect CPC for autografting. Seven also received rhG-CSF (filgrastim) 5 mcg/kg/day over the period of apheresis; these patients were allocated to rhG-CSF treatment sequentially from the time the drug became available for clinical use. Following VCAD-induced pancytopenia, CFU-GM peaked at a median of 853/mL (range 96-4352; 7.6 times the basal level). RhG-CSF administration increased CFU-GM levels, but not significantly. With rhG-CSF the CFU-GM peak was reached sooner, toxicity was reduced and granulocytopenia was less protracted. Fewer aphereses were run in the rhG-CSF group, with higher yields per single run, and patients began and completed their collection program more quickly. The VCAD combination is able to mobilize CPC in patients with MM, and rhG-CSF is recommended as a fundamental part of the priming schedule.

  7. Study of a Fine Grained Threaded Framework Design

    NASA Astrophysics Data System (ADS)

    Jones, C. D.

    2012-12-01

    Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks since the LHC is expected to deliver more complex events (e.g., greater pileup) in the coming years while the LHC experiments' frameworks are already memory constrained. Therefore in the not so distant future we may need to be able to efficiently use multiple cores to process one event. In this presentation we will discuss a design for an HEP processing framework which can allow very fine grained parallelization within one event as well as supporting processing multiple events simultaneously while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available) whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe and therefore allows one to easily mark modules or sub-parts of modules as being thread unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design we will provide measurements from simulating different processing scenarios, where the processing times used for the simulation are drawn from processing times measured from actual CMS event processing.

  8. Dtest Testing Software

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
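
    The scan-and-execute pattern described above is easy to picture in miniature. The sketch below is not the dtest source; the configuration file name (test.cfg) and the convention that its first line holds the test command are illustrative assumptions. It walks a tree, collects test directories, and distributes runs over CPU cores.

      # Minimal dtest-like runner: find per-directory test configs, run each
      # test in its directory, and fan the work out over a process pool.
      import os
      import subprocess
      from multiprocessing import Pool, cpu_count

      CONFIG_NAME = "test.cfg"   # hypothetical per-directory test description

      def find_tests(root):
          for dirpath, _dirnames, filenames in os.walk(root):
              if CONFIG_NAME in filenames:
                  yield dirpath

      def run_test(test_dir):
          # Assume the config's first line is the command to execute;
          # exit code 0 counts as a pass.
          with open(os.path.join(test_dir, CONFIG_NAME)) as f:
              cmd = f.readline().split()
          result = subprocess.run(cmd, cwd=test_dir)
          return test_dir, result.returncode == 0

      if __name__ == "__main__":
          with Pool(cpu_count()) as pool:
              for name, ok in pool.imap_unordered(run_test, find_tests(".")):
                  print("PASS" if ok else "FAIL", name)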

  9. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998
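
    The abstract does not give the EPSO details, but the baseline it is compared against is standard binary PSO, which for a 0-1 model such as the SSED can be sketched as follows; the objective and the conflict penalty below are toy placeholders, not the paper's model.

      # Standard binary PSO: velocities over 0-1 variables, with a sigmoid
      # mapping velocity to the probability of each bit being 1.
      import numpy as np

      def binary_pso(objective, n_bits, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.integers(0, 2, (n_particles, n_bits))
          v = rng.normal(0.0, 1.0, (n_particles, n_bits))
          pbest = x.copy()
          pbest_f = np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random(v.shape), rng.random(v.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Toy stand-in: minimize "movements" (1-bits) with one fake conflict term.
      best, cost = binary_pso(lambda p: p.sum() + 10 * (p[0] & p[1]), n_bits=12)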

  10. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
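
    Generic Raman calibrations of this kind are typically multivariate regressions from preprocessed spectra to offline reference measurements, with partial least squares as the usual workhorse. The sketch below shows that shape of workflow with scikit-learn on synthetic arrays; the data, the 8-component choice, and the predicted parameter are illustrative assumptions, not details from the paper.

      # PLS calibration sketch: spectra -> process parameter, with CV scoring.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(120, 900))    # 120 spectra x 900 Raman shift channels
      y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=120)  # fake reference values

      pls = PLSRegression(n_components=8)          # latent variables chosen by CV
      print(cross_val_score(pls, X, y, cv=5, scoring="r2"))
      pls.fit(X, y)
      y_new = pls.predict(X[:5])                   # "real-time" prediction step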

  11. Of Faeces and Sweat. How Much a Mouse is Willing to Run: Having a Hard Time Measuring Spontaneous Physical Activity in Different Mouse Sub-Strains.

    PubMed

    Coletti, Dario; Adamo, Sergio; Moresi, Viviana

    2017-02-24

    Invited Letter to the Editor. Physical activity has multiple beneficial effects in the physiology and pathology of the organism. In particular, we and other groups have shown that running counteracts cancer cachexia in both humans and rodents. The latter are prone to exercise in wheel-equipped cages even at advanced stages of cachexia. However, when we wanted to replicate the experimental model routinely used at the University of Rome in a different laboratory (i.e. at Paris 6 University), we had to struggle with puzzling results due to unpredicted mouse behavior. Here we report the experience and offer the explanation underlying these apparently irreproducible results. The original data are currently used for teaching purposes in undergraduate student classes of biological sciences.

  12. Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, W; Paddack, E; Aceves, S

    2001-12-27

    We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has been recently demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include the applicability of Co-simulation and InVeST to analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.

  13. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
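
    For orientation, the exact algorithm being accelerated is the Gillespie direct method, whose serial form is short; the GPU variant runs many such trajectories in parallel across warps. A reference sketch for a reversible isomerization A <-> B:

      # Gillespie direct method (serial reference implementation).
      import numpy as np

      def gillespie(x0, rates, stoich, propensities, t_end, rng):
          t, x = 0.0, np.array(x0, dtype=float)
          traj = [(t, tuple(x))]
          while t < t_end:
              a = propensities(x, rates)
              a0 = a.sum()
              if a0 == 0.0:
                  break                            # no reaction can fire
              t += rng.exponential(1.0 / a0)       # time to next reaction
              j = rng.choice(len(a), p=a / a0)     # which reaction fires
              x += stoich[j]
              traj.append((t, tuple(x)))
          return traj

      rng = np.random.default_rng(42)
      traj = gillespie(x0=[100, 0], rates=[1.0, 0.5],
                       stoich=np.array([[-1, 1], [1, -1]]),   # A->B, B->A
                       propensities=lambda x, k: np.array([k[0]*x[0], k[1]*x[1]]),
                       t_end=5.0, rng=rng)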

  14. Development of a liquid chromatography-multiple reaction monitoring procedure for concurrent verification of exposure to different forms of mustard agents.

    PubMed

    Yeo, Thong-Hiang; Ho, Mer-Lin; Loke, Weng-Keong

    2008-01-01

    A novel liquid chromatography-multiple reaction monitoring (LC-MRM) procedure has been developed for retrospective diagnosis of exposure to different forms of mustard agents. This concise method is able to validate prior exposure to nitrogen mustards (HN-1, HN-2, and HN-3) or sulfur mustard (HD) in a single run, which significantly reduces analysis time compared to separate runs to screen for different mustards' biomarkers based on tandem mass spectrometry. Belonging to one of the more toxic classes of chemical warfare agents, these potent vesicants bind covalently to the cysteine-34 residue of human serum albumin. This results in the formation of stable adducts whose identities were confirmed by a de novo sequencing bioinformatics software package. Our developed technique tracks these albumin-derived adduct biomarkers in blood samples which persist in vitro following exposure, enabling a detection limit of 200 nM of HN-1, 100 nM of HN-2, 200 nM of HN-3, or 50 nM of HD in human blood. The CWA-adducts formed in blood samples can be conveniently and sensitively analyzed by this MRM technique to allow rapid and reliable screening.

  15. LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Thirumalainambi, Rajkumar

    2006-01-01

    This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, Synthetic data systems, and Simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.

  16. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
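
    The self-scheduling idea translates directly to a modern process pool: workers pull the next case as soon as they finish the last, so fast and slow cases balance automatically. A sketch for the "many angles of attack" example, with a placeholder linear system standing in for the panel code:

      # Self-scheduling many small serial jobs (one dense solve per angle).
      import numpy as np
      from multiprocessing import Pool

      N = 500  # placeholder panel count

      def solve_case(alpha_deg):
          rng = np.random.default_rng(abs(int(alpha_deg * 10)))
          A = rng.normal(size=(N, N)) + N * np.eye(N)      # well-conditioned stand-in
          b = np.sin(np.radians(alpha_deg)) * np.ones(N)   # alpha-dependent RHS
          return alpha_deg, np.linalg.solve(A, b)[0]

      if __name__ == "__main__":
          angles = np.linspace(-10, 10, 41)
          with Pool() as pool:
              # imap_unordered hands out cases as workers free up
              for alpha, x0 in pool.imap_unordered(solve_case, angles):
                  print(f"alpha = {alpha:+.1f} deg, x[0] = {x0:.4e}")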

  17. Sample preparation and liquid chromatography-tandem mass spectrometry for multiple steroids in mammalian and avian circulation.

    PubMed

    Koren, Lee; Ng, Ella S M; Soma, Kiran K; Wynne-Edwards, Katherine E

    2012-01-01

    Blood samples from wild mammals and birds are often limited in volume, allowing researchers to quantify only one or two steroids from a single sample by immunoassays. In addition, wildlife serum or plasma samples are often lipemic, necessitating stringent sample preparation. Here, we validated sample preparation for simultaneous liquid chromatography--tandem mass spectrometry (LC-MS/MS) quantitation of cortisol, corticosterone, 11-deoxycortisol, dehydroepiandrosterone (DHEA), 17β-estradiol, progesterone, 17α-hydroxyprogesterone and testosterone from diverse mammalian (7 species) and avian (5 species) samples. Using 100 µL of serum or plasma, we quantified (signal-to-noise (S/N) ratio ≥ 10) 4-7 steroids depending on the species and sample, without derivatization. Steroids were extracted from serum or plasma using automated solid-phase extraction where samples were loaded onto C18 columns, washed with water and hexane, and then eluted with ethyl acetate. Quantitation by LC-MS/MS was done in positive ion, multiple reaction-monitoring (MRM) mode with an atmospheric pressure chemical ionization (APCI) source and heated nebulizer (500°C). Deuterated steroids served as internal standards and run time was 15 minutes. Extraction recoveries were 87-101% for the 8 analytes, and all intra- and inter-run CVs were ≤ 8.25%. This quantitation method yields good recoveries with variable lipid-content samples, avoids antibody cross-reactivity issues, and delivers results for multiple steroids. Thus, this method can enrich datasets by providing simultaneous quantitation of multiple steroids, and allow researchers to reimagine the hypotheses that could be tested with their volume-limited, lipemic, wildlife samples.

  18. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
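
    The abstract does not spell out how the four representative weeks are scaled back to an annual figure; one plausible reading, sketched below purely for illustration, is to weight each simulated week by the thirteen weeks of its quarter.

      # Hypothetical annualization of four representative weeks (one per quarter);
      # the 13x weighting and the kWh figures are assumptions, not the paper's data.
      weekly_kwh = {"Q1": 2100.0, "Q2": 1650.0, "Q3": 1900.0, "Q4": 2050.0}
      annual_estimate_kwh = 13 * sum(weekly_kwh.values())   # 13 weeks per quarter
      print(f"estimated annual use: {annual_estimate_kwh:.0f} kWh")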

  19. Determinants of ambulance response time: A study in Sabah, Malaysia

    NASA Astrophysics Data System (ADS)

    Chin, Su Na; Cheah, Phee Kheng; Arifin, Muhamad Yaakub; Wong, Boh Leng; Omar, Zaturrawiah; Yassin, Fouziah Md; Gabda, Darmesah

    2017-04-01

    Ambulance response time (ART) is one of the standard key performance indicators (KPI) for measuring the delivery performance of emergency medical services (EMS). When the mean ART of an EMS system meets the KPI target, the EMS system is performing well. This paper considers the determinants of ART, using data sampled from 967 ambulance runs in a government hospital in Sabah. Multiple regression analysis with backward elimination was proposed for the identification of significant factors. Among the underlying factors, travel distance, age of patients, type of treatment, and peak hours were identified as significantly affecting ART. Identifying factors that influence ART helps the development of strategic improvement planning for reducing the ART.
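
    Backward elimination itself is mechanical: fit the full model, drop the least significant predictor, and refit until everything remaining clears a threshold. A generic sketch with statsmodels on synthetic data follows; the alpha = 0.05 cutoff and the predictor names are assumptions, not the study's specification.

      # Backward elimination on p-values with statsmodels OLS.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def backward_eliminate(y, X, alpha=0.05):
          X = sm.add_constant(X)
          while True:
              model = sm.OLS(y, X).fit()
              pvals = model.pvalues.drop("const")
              worst = pvals.idxmax()               # least significant predictor
              if pvals[worst] <= alpha:
                  return model                     # everything remaining is kept
              X = X.drop(columns=[worst])

      rng = np.random.default_rng(3)
      X = pd.DataFrame({"distance_km": rng.uniform(1, 30, 200),
                        "patient_age": rng.integers(1, 90, 200),
                        "peak_hour": rng.integers(0, 2, 200)})
      y = 2.0 * X["distance_km"] + 3.0 * X["peak_hour"] + rng.normal(0, 5, 200)
      print(backward_eliminate(y, X).params)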

  20. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

    Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
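
    The core scheduling rule (always advance the least-advanced virtual clock) can be captured in a few lines. The sketch below mimics the idea only; quantum selection, core affinities, and the hypervisor mechanics of the actual scheduler are all abstracted away.

      # Conceptual virtual-clock scheduler: run the virtual core with the
      # smallest virtual time for one quantum; idle cores leap ahead in
      # virtual time, freeing real cores for busy guests.
      import heapq

      def schedule(vcores, quantum, t_end):
          # vcores: list of (name, step) where step(q) returns virtual time consumed
          clocks = [(0.0, i) for i in range(len(vcores))]
          heapq.heapify(clocks)
          while clocks:
              vt, i = heapq.heappop(clocks)        # least-advanced virtual core
              if vt >= t_end:
                  continue                         # this core is done
              _name, step = vcores[i]
              heapq.heappush(clocks, (vt + step(quantum), i))

      # A busy core consumes exactly its quantum; an idle core leaps 10x ahead.
      schedule([("busy", lambda q: q), ("idle", lambda q: 10 * q)],
               quantum=0.001, t_end=0.01)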

  1. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    Excerpt (table of contents fragments only): baseline manning (runs 1, 2, and 3) and baseline statistics interpretation; a key statistic matrix for runs 1-12; paired t-test results comparing ECC completion times across runs (e.g., run 5 vs. run 6, run 3 vs. run 9).

  2. Centrifuge-Simulated Suborbital Spaceflight in a Subject with Cardiac Malformation.

    PubMed

    Blue, Rebecca S; Blacher, Eric; Castleberry, Tarah L; Vanderploeg, James M

    2015-11-01

    Commercial spaceflight participants (SFPs) will introduce new medical challenges to the aerospace community, with unique medical conditions never before exposed to the space environment. This is a case report regarding the response of a subject with multiple cardiac malformations, including aortic insufficiency, pulmonary atresia, pulmonary valve replacement, ventricular septal defect (post-repair), and pulmonary artery stenosis (post-dilation), to centrifuge acceleration simulating suborbital flight. A 23-yr-old man with a history of multiple congenital cardiac malformations underwent seven centrifuge runs over 2 d. Day 1 consisted of two +G(z) runs (peak = +3.5 G(z), run 2) and two +G(x) runs (peak = +6.0 G(x), run 4). Day 2 consisted of three runs approximating suborbital spaceflight profiles (combined +G(x) and +G(z)). Data collected included blood pressure, electrocardiogram, pulse oximetry, neurovestibular exams, and post-run questionnaires regarding motion sickness, disorientation, greyout, and other symptoms. Despite the subject's significant medical history, he tolerated the acceleration profiles well and demonstrated no significant abnormal physiological responses. Potential risks to SFPs with aortic insufficiency, artificial heart valves, or valvular insufficiency include lower +G(z) tolerance, earlier symptom onset, and ineffective mitigation strategies such as anti-G straining maneuvers. There are no prior studies of prolonged accelerations approximating spaceflight in such individuals. This case demonstrates tolerance of acceleration profiles in an otherwise young and healthy individual with significant cardiac malformations, suggesting that such conditions may not necessarily preclude participation in commercial spaceflight.

  3. Molecular t-matrices for Low-Energy Electron Diffraction (TMOL v1.1)

    NASA Astrophysics Data System (ADS)

    Blanco-Rey, Maria; de Andres, Pedro; Held, Georg; King, David A.

    2004-08-01

    We describe a FORTRAN-90 program that computes scattering t-matrices for a molecule. These can be used in a Low-Energy Electron Diffraction program to solve the molecular structural problem very efficiently. The intramolecular multiple scattering is computed within a Dyson-like approach, using free space Green propagators in a basis of spherical waves. The advantage of this approach is related to exploiting the chemical identity of the molecule, and to the simplicity to translate and rotate these t-matrices without performing a new multiple-scattering calculation for each configuration. FORTRAN-90 routines for rotating the resulting t-matrices using Wigner matrices are also provided. Program summary: Title of program: TMOL. Catalogue number: ADUF. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Alpha ev6-21264 (700 MHz) and Pentium-IV. Operating systems: Digital UNIX V5.0 and Linux (Red Hat 8.0). Programming language: FORTRAN-90/95 (Compaq True64 compiler, and Intel Fortran Compiler 7.0 for Linux). High-speed storage required for the test run: minimum 64 Mbytes; it can grow depending on the system considered. Disk storage required: none. No. of bits in a word: 64 and 32. No. of lines in distributed program, including test data etc.: 5404. No. of bytes in distributed program, including test data etc.: 59 856. Distribution format: tar.gz. Nature of problem: We describe the FORTRAN-90 program TMOL (v1.1) for the computation of non-diagonal scattering t-matrices for molecules or any other poly-atomic sub-unit of surface structures. These matrices can be used in a standard Low-Energy Electron Diffraction program, such as LEED90 or CLEED. Method of solution: A general non-diagonal t-matrix is assumed for the atoms or more general scatterers forming the molecule. The molecular t-matrix is solved adding the possible intramolecular multiple scattering events using Green's propagator formalism. The resulting t-matrix is referred to the mass centre of the molecule and can be easily translated with these propagators and rotated applying Wigner matrices. Typical running time: calculating the t-matrix for a single energy takes a few seconds. Time depends on the maximum angular momentum quantum number, lmax, and the number of scatterers in the molecule, N. Running time scales as lmax^6 and N^3. References: [1] S. Andersson, J.B. Pendry, J. Phys. C: Solid St. Phys. 13 (1980) 3547. [2] A. Gonis, W.H. Butler, Multiple Scattering in Solids, Springer-Verlag, Berlin/New York, 2000.

  4. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further reduce rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanying increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
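
    The master/slave task distribution described above has a compact mpi4py skeleton: rank 0 hands out work units on demand and collects results, so faster ranks simply ask for more. This is a sketch of the pattern, not the P-SWAT source; the "subbasin" task list is a placeholder.

      # Master/slave work distribution with mpi4py.
      # Run with, e.g.: mpiexec -n 5 python pswat_sketch.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TASKS = list(range(20))                  # placeholder subbasin IDs

      if rank == 0:                            # master
          status = MPI.Status()
          pending, results, stopped = list(TASKS), [], 0
          while stopped < size - 1:
              msg = comm.recv(source=MPI.ANY_SOURCE, status=status)
              src = status.Get_source()
              if msg != "ready":
                  results.append(msg)          # a finished result arrived
                  continue
              if pending:
                  comm.send(pending.pop(), dest=src)
              else:
                  comm.send(None, dest=src)    # no work left: stop this slave
                  stopped += 1
          print(f"collected {len(results)} results")
      else:                                    # slaves: request, compute, return
          while True:
              comm.send("ready", dest=0)
              task = comm.recv(source=0)
              if task is None:
                  break
              comm.send(("done", task), dest=0)   # stand-in for a subbasin run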

  5. Multiple Robots Localization Via Data Sharing

    DTIC Science & Technology

    2015-09-01

    Excerpt fragments only: the thesis motivates multi-robot localization via data sharing by analogy with teams of multiple humans, each with specialized skills complementing each other, working to create a solution; the remaining fragments are code listings (pygame color constants) and a description of automate.py, a helper file that assists in running multiple simulations.

  6. WE-C-217BCD-08: Rapid Monte Carlo Simulations of DQE(f) of Scintillator-Based Detectors.

    PubMed

    Star-Lack, J; Abel, E; Constantin, D; Fahrig, R; Sun, M

    2012-06-01

    Monte Carlo simulations of DQE(f) can greatly aid in the design of scintillator-based detectors by helping optimize key parameters including scintillator material and thickness, pixel size, surface finish, and septa reflectivity. However, the additional optical transport significantly increases simulation times, necessitating a large number of parallel processors to adequately explore the parameter space. To address this limitation, we have optimized the DQE(f) algorithm, reducing simulation times per design iteration to 10 minutes on a single CPU. DQE(f) is proportional to the ratio MTF(f)^2 / NPS(f). The LSF-MTF simulation uses a slanted line source and is rapidly performed with relatively few gammas launched. However, the conventional NPS simulation for standard radiation exposure levels requires the acquisition of multiple flood fields (nRun), each requiring billions of input gamma photons (nGamma), many of which will scintillate, thereby producing thousands of optical photons (nOpt) per deposited MeV. The resulting execution time is proportional to the product nRun x nGamma x nOpt. In this investigation, we revisit the theoretical derivation of DQE(f), and reveal significant computation time savings through the optimization of nRun, nGamma, and nOpt. Using GEANT4, we determine optimal values for these three variables for a GOS scintillator-amorphous silicon portal imager. Both isotropic and Mie optical scattering processes were modeled. Simulation results were validated against the literature. We found that, depending on the radiative and optical attenuation properties of the scintillator, the NPS can be accurately computed using values for nGamma below 1000, and values for nOpt below 500/MeV. nRun should remain above 200. Using these parameters, typical computation times for a complete NPS ranged from 2-10 minutes on a single CPU. The number of launched particles and corresponding execution times for a DQE simulation can be dramatically reduced, allowing for accurate computation with modest computer hardware. NIH R01 CA138426. Several authors work for Varian Medical Systems. © 2012 American Association of Physicists in Medicine.
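
    Since execution time is proportional to nRun x nGamma x nOpt, the payoff of the reported settings can be checked with one line of arithmetic; the "conventional" baseline values below are illustrative assumptions, not figures from the paper.

      # Back-of-envelope NPS cost: time ~ nRun * nGamma * nOpt.
      baseline  = dict(nRun=200, nGamma=1_000_000, nOpt=5_000)  # assumed baseline
      optimized = dict(nRun=200, nGamma=1_000,     nOpt=500)    # paper's bounds

      def cost(p):
          return p["nRun"] * p["nGamma"] * p["nOpt"]

      print(f"relative speedup ~ {cost(baseline) / cost(optimized):,.0f}x")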

  7. Development for SSV on a parallel processing system (PARAGON)

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.; Allmen, Mark; Carroll, Michael J.; Rich, Dan

    1995-12-01

    A goal of the surrogate semi-autonomous vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. This paper describes the process and tools used in porting UGV/SSV (unmanned ground vehicle) autonomous mobility and target recognition algorithms from a SISD (single instruction single data) processor architecture (i.e., a Sun SPARC workstation running C/UNIX) to a MIMD (multiple instruction multiple data) parallel processor architecture (i.e., PARAGON, a parallel set of i860 processors running C/UNIX). It discusses the gains in performance and the pitfalls of such a venture. It also examines the merits of this processor architecture (based on this conceptual prototyping effort) and programming paradigm to meet the final SSV demonstration requirements.

  8. Operational effectiveness of a Multiple Aquila Control System (MACS)

    NASA Technical Reports Server (NTRS)

    Brown, R. W.; Flynn, J. D.; Frey, M. R.

    1983-01-01

    The operational effectiveness of a multiple aquila control system (MACS) was examined under a variety of remotely piloted vehicle (RPV) mission configurations. The set of assumptions and inputs used to form the rules under which a computerized simulation of MACS was run is given. The characteristics that are to govern MACS operations include: the battlefield environment that generates the requests for RPV missions, operating time-lines of the RPV-peculiar equipment, maintenance requirements, and vulnerability to enemy fire. The number of RPV missions and the number of operation days are discussed. Command, control, and communication data rates are estimated by determining how many messages are passed and what information is necessary in them to support ground coordination between MACS sections.

  9. Memory interface simulator: A computer design aid

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPU's and the interface between the CPU's and RAM. Design tradeoffs are presented in the following areas: Bus widths, CPU microprogram read only memory cycle time, multiple instruction fetch, and instruction mix.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of the intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  11. A portable pattern-based design technology co-optimization flow to reduce optical proximity correction run-time

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chieh; Li, Tsung-Han; Lin, Hung-Yu; Chen, Kao-Tun; Wu, Chun-Sheng; Lai, Ya-Chieh; Hurat, Philippe

    2018-03-01

    As process technology improves and integrated circuit (IC) design complexity increases, failure rates caused by optical effects are rising in semiconductor manufacturing. To enhance chip quality, optical proximity correction (OPC) plays an indispensable role in the manufacturing industry. However, OPC, which includes model creation, correction, simulation, and verification, is a bottleneck between design and manufacture due to its multiple iterations and the advanced mathematical description of physical behavior. Thus, this paper presents a pattern-based design technology co-optimization (PB-DTCO) flow that cooperates with OPC to find patterns that will negatively affect yield and fix them automatically in advance, reducing the run time of OPC operation. The PB-DTCO flow can generate plenty of test patterns for model creation and yield gain, classify candidate patterns systematically, and furthermore quickly build up banks of paired match and optimization patterns. Those banks can be used for hotspot fixing and layout optimization, and can also be referenced for the next technology node. Therefore, the combination of the PB-DTCO flow with OPC not only reduces time-to-market but is also flexible and can be easily adapted to diverse OPC flows.

  12. PGCA: An algorithm to link protein groups created from MS/MS data

    PubMed Central

    Sasaki, Mayu; Hollander, Zsuzsanna; Smith, Derek; McManus, Bruce; McMaster, W. Robert; Ng, Raymond T.; Cohen Freue, Gabriela V.

    2017-01-01

    The quantitation of proteins using shotgun proteomics has gained popularity in the last decades, simplifying sample handling procedures, removing extensive protein separation steps and achieving a relatively high throughput readout. The process starts with the digestion of the protein mixture into peptides, which are then separated by liquid chromatography and sequenced by tandem mass spectrometry (MS/MS). At the end of the workflow, recovering the identity of the proteins originally present in the sample is often a difficult and ambiguous process, because more than one protein identifier may match a set of peptides identified from the MS/MS spectra. To address this identification problem, many MS/MS data processing software tools combine all plausible protein identifiers matching a common set of peptides into a protein group. However, this solution introduces new challenges in studies with multiple experimental runs, which can be characterized by three main factors: i) protein groups’ identifiers are local, i.e., they vary run to run, ii) the composition of each group may change across runs, and iii) the supporting evidence of proteins within each group may also change across runs. Since in general there is no conclusive evidence about the absence of proteins in the groups, protein groups need to be linked across different runs in subsequent statistical analyses. We propose an algorithm, called Protein Group Code Algorithm (PGCA), to link groups from multiple experimental runs by forming global protein groups from connected local groups. The algorithm is computationally inexpensive and enables the connection and analysis of lists of protein groups across runs needed in biomarkers studies. We illustrate the identification problem and the stability of the PGCA mapping using 65 iTRAQ experimental runs. Further, we use two biomarker studies to show how PGCA enables the discovery of relevant candidate protein group markers with similar but non-identical compositions in different runs. PMID:28562641
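
    At its core, linking local groups into global groups is a connected-components computation: any two local groups sharing a protein identifier belong to the same global group. The sketch below shows that kernel with union-find; it is a simplification of PGCA, ignoring the supporting-evidence bookkeeping the paper describes.

      # Global protein groups as connected components over shared protein IDs.
      from collections import defaultdict

      def global_groups(runs):
          parent = {}
          def find(x):
              while parent.setdefault(x, x) != x:
                  parent[x] = parent[parent[x]]    # path halving
                  x = parent[x]
              return x
          def union(a, b):
              parent[find(a)] = find(b)
          for run in runs:                         # run = list of local groups
              for group in run:
                  for prot in group:
                      find(prot)                   # register every protein ID
                  for prot in group[1:]:
                      union(group[0], prot)        # tie the group together
          clusters = defaultdict(set)
          for prot in parent:
              clusters[find(prot)].add(prot)
          return list(clusters.values())

      runs = [[["P1", "P2"], ["P3"]],              # run 1: two local groups
              [["P2", "P4"], ["P3", "P5"]]]        # run 2: two local groups
      print(global_groups(runs))  # -> [{'P1','P2','P4'}, {'P3','P5'}]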

  13. An Authentication Protocol for Future Sensor Networks.

    PubMed

    Bilal, Muhammad; Kang, Shin-Gak

    2017-04-28

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol run, which makes the security issue even more challenging. The currently available authentication protocols were designed for the autonomous WSN and do not account for the above requirements. Hence, ensuring a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using the BAN-logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols.
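
    The ticket mechanism can be illustrated with a toy construction: the base station binds a node identity and expiry under a MAC, and any party holding the verification key can accept the ticket without a full handshake. This sketch is only an analogy for the idea; it is not the SMSN protocol, and the shared-key setup is an assumption.

      # Toy re-authentication ticket: HMAC over (node, expiry).
      import hashlib
      import hmac
      import json
      import time

      NETWORK_KEY = b"shared-bs-sensor-key"        # illustrative assumption

      def issue_ticket(node_id, lifetime_s=3600):
          body = json.dumps({"node": node_id,
                             "exp": int(time.time()) + lifetime_s},
                            sort_keys=True).encode()
          tag = hmac.new(NETWORK_KEY, body, hashlib.sha256).hexdigest()
          return body, tag

      def verify_ticket(body, tag):
          expect = hmac.new(NETWORK_KEY, body, hashlib.sha256).hexdigest()
          return (hmac.compare_digest(tag, expect)
                  and json.loads(body)["exp"] > time.time())

      body, tag = issue_ticket("node-17")
      assert verify_ticket(body, tag)              # fast re-auth, no handshake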

  14. An Authentication Protocol for Future Sensor Networks

    PubMed Central

    Bilal, Muhammad; Kang, Shin-Gak

    2017-01-01

    Authentication is one of the essential security services in Wireless Sensor Networks (WSNs) for ensuring secure data sessions. Sensor node authentication ensures the confidentiality and validity of data collected by the sensor node, whereas user authentication guarantees that only legitimate users can access the sensor data. In a mobile WSN, sensor and user nodes move across the network and exchange data with multiple nodes, thus experiencing the authentication process multiple times. The integration of WSNs with Internet of Things (IoT) brings forth a new kind of WSN architecture along with stricter security requirements; for instance, a sensor node or a user node may need to establish multiple concurrent secure data sessions. With concurrent data sessions, the frequency of the re-authentication process increases in proportion to the number of concurrent connections. Moreover, to establish multiple data sessions, it is essential that a protocol participant have the capability of running multiple instances of the protocol run, which makes the security issue even more challenging. The currently available authentication protocols were designed for the autonomous WSN and do not account for the above requirements. Hence, ensuring a lightweight and efficient authentication protocol has become more crucial. In this paper, we present a novel, lightweight and efficient key exchange and authentication protocol suite called the Secure Mobile Sensor Network (SMSN) Authentication Protocol. In the SMSN a mobile node goes through an initial authentication procedure and receives a re-authentication ticket from the base station. Later a mobile node can use this re-authentication ticket when establishing multiple data exchange sessions and/or when moving across the network. This scheme reduces the communication and computational complexity of the authentication process. We proved the strength of our protocol with rigorous security analysis (including formal analysis using the BAN-logic) and simulated the SMSN and previously proposed schemes in an automated protocol verifier tool. Finally, we compared the computational complexity and communication cost against well-known authentication protocols. PMID:28452937

  15. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    PubMed

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  16. Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk

    PubMed Central

    Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Background Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time and mortality remain uncertain. Objectives We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods Running was assessed on the medical history questionnaire by leisure-time activity. Results During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately, 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions Running, even 5-10 minutes per day and slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581

  17. New insight into the comparative power of quality-control rules that use control observations within a single analytical run.

    PubMed

    Parvin, C A

    1993-03-01

    The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
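
    The rules in question are simple enough to simulate directly, which is also how their power functions are usually visualized. The sketch below estimates, for two control observations per run, how often a mean rule and a within-run SD (difference) rule reject as systematic error grows; the control limits are illustrative choices, not the paper's.

      # Monte Carlo power of two within-run QC rules (2 controls per run).
      import numpy as np

      rng = np.random.default_rng(7)

      def rejection_rate(rule, shift=0.0, sd=1.0, n_runs=100_000):
          obs = rng.normal(shift, sd, size=(n_runs, 2))
          return rule(obs).mean()

      mean_rule = lambda o: np.abs(o.mean(axis=1)) > 2.33 / np.sqrt(2)
      sd_rule = lambda o: np.abs(o[:, 0] - o[:, 1]) / np.sqrt(2) > 2.77

      for shift in (0.0, 1.0, 2.0):                # systematic error, in SDs
          print(f"shift={shift}: mean rule {rejection_rate(mean_rule, shift):.3f}, "
                f"SD rule {rejection_rate(sd_rule, shift):.3f}")
      # The mean rule gains power with shift; the SD rule stays near its
      # false-rejection rate, illustrating that no single rule is best.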

  18. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is a ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.
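
    As one concrete (and heavily hedged) recipe for the imaging step on a Linux host: capture the legacy disk with dd, convert it to a compact format with qemu-img, and boot it under QEMU. The device path, the RAM figure, and the choice of QEMU over other hypervisors are assumptions for illustration, not the chapter's prescribed procedure.

      # Physical-to-virtual sketch: image the disk, convert it, boot the copy.
      import subprocess

      subprocess.run(["dd", "if=/dev/sda", "of=legacy.raw", "bs=4M"], check=True)
      subprocess.run(["qemu-img", "convert", "-O", "qcow2",
                      "legacy.raw", "legacy.qcow2"], check=True)
      subprocess.run(["qemu-system-i386", "-m", "1024",
                      "-hda", "legacy.qcow2"], check=True)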

  19. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  20. interThermalPhaseChangeFoam-A framework for two-phase flow simulations with thermally driven phase change

    NASA Astrophysics Data System (ADS)

    Nabil, Mahdi; Rattner, Alexander S.

    The volume-of-fluid (VOF) approach is a mature technique for simulating two-phase flows. However, VOF simulation of phase-change heat transfer is still in its infancy. Multiple closure formulations have been proposed in the literature, each suited to different applications. While these have enabled significant research advances, few implementations are publicly available, actively maintained, or inter-operable. Here, a VOF solver is presented (interThermalPhaseChangeFoam), which incorporates an extensible framework for phase-change heat transfer modeling, enabling simulation of diverse phenomena in a single environment. The solver employs object oriented OpenFOAM library features, including Run-Time-Type-Identification to enable rapid implementation and run-time selection of phase change and surface tension force models. The solver is packaged with multiple phase change and surface tension closure models, adapted and refined from earlier studies. This code has previously been applied to study wavy film condensation, Taylor flow evaporation, nucleate boiling, and dropwise condensation. Tutorial cases are provided for simulation of horizontal film condensation, smooth and wavy falling film condensation, nucleate boiling, and bubble condensation. Validation and grid sensitivity studies, interfacial transport models, effects of spurious currents from surface tension models, effects of artificial heat transfer due to numerical factors, and parallel scaling performance are described in detail in the Supplemental Material (see Appendix A). By incorporating the framework and demonstration cases into a single environment, users can rapidly apply the solver to study phase-change processes of interest.

  1. Impacts of conservation and human development policy across stakeholders and scales.

    PubMed

    Li, Cong; Zheng, Hua; Li, Shuzhuo; Chen, Xiaoshu; Li, Jie; Zeng, Weihong; Liang, Yicheng; Polasky, Stephen; Feldman, Marcus W; Ruckelshaus, Mary; Ouyang, Zhiyun; Daily, Gretchen C

    2015-06-16

    Ideally, both ecosystem service and human development policies should improve human well-being through the conservation of ecosystems that provide valuable services. However, program costs and benefits to multiple stakeholders, and how they change through time, are rarely carefully analyzed. We examine one of China's new ecosystem service protection and human development policies: the Relocation and Settlement Program of Southern Shaanxi Province (RSP), which pays households who opt voluntarily to resettle from mountainous areas. The RSP aims to reduce disaster risk, restore important ecosystem services, and improve human well-being. We use household surveys and biophysical data in an integrated economic cost-benefit analysis for multiple stakeholders. We project that the RSP will result in positive net benefits to the municipal government, and to cross-region and global beneficiaries, over the long run, along with environmental improvements including improved water quality, soil erosion control, and carbon sequestration. However, there are significant short-run relocation costs for local residents, and poor households may have difficulty participating because they lack the resources to pay the initial costs of relocation. Greater subsidies and subsequent support after relocation are necessary to reduce the payback period of resettled households in the long run. Compensation from downstream beneficiaries for improved water and from carbon trades could be channeled into reducing relocation costs for the poor and sharing the burden of RSP implementation. The effectiveness of the RSP could also be greatly strengthened by early investment in developing human capital and environment-friendly jobs and establishing long-term mechanisms for securing program goals. These challenges and potential solutions pervade ecosystem service efforts globally.

  2. Retention time alignment of LC/MS data by a divide-and-conquer algorithm.

    PubMed

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
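
    The recursion is easy to miniaturize: estimate one constant shift for the current segment, apply it, then split the segment and repeat. The sketch below keeps only retention times (real feature matching also requires m/z agreement) and uses a crude grid search for the shift, so it illustrates the divide-and-conquer structure rather than the paper's scoring details.

      # Divide-and-conquer retention-time alignment (structure only).
      import numpy as np

      def best_shift(seg_rt, ref_rt, shifts, tol=0.1):
          # score a shift by how many segment features land near a reference time
          score = lambda s: sum(np.min(np.abs(ref_rt - (t + s))) < tol
                                for t in seg_rt)
          return max(shifts, key=score)

      def align(sample_rt, ref_rt, lo, hi, min_span=2.0):
          idx = [i for i, t in enumerate(sample_rt) if lo <= t < hi]
          if not idx:
              return sample_rt
          s = best_shift(sample_rt[idx], ref_rt, np.linspace(-1, 1, 41))
          out = sample_rt.copy()
          out[idx] = sample_rt[idx] + s            # one constant shift per segment
          if hi - lo > min_span:                   # divide and recurse on halves
              mid = (lo + hi) / 2.0
              out = align(out, ref_rt, lo, mid, min_span)
              out = align(out, ref_rt, mid, hi, min_span)
          return out

      ref = np.sort(np.random.default_rng(0).uniform(0, 20, 50))
      sample = ref + 0.3 + 0.02 * ref              # slowly drifting shift
      aligned = align(sample, ref, 0.0, 21.0)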

  3. Accurate Sample Time Reconstruction of Inertial FIFO Data.

    PubMed

    Stieber, Sebastian; Dorsch, Rainer; Haubelt, Christian

    2017-12-13

    In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes, and wear-out effects) on the reconstruction of sample times. Furthermore, timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction, independent of the actual clock drift, with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps from the sensor FIFO interface in a forward-only processing manner, as a robust and energy-saving solution. The proposed algorithm lowers the overall standard deviation of reconstructed sampling periods below 40 μs, while achieving run-time savings of up to 42% compared to single-sample acquisition.
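
    As an illustration of the forward-only reconstruction idea (not the authors' algorithm), the sketch below assigns per-sample times within each FIFO batch from the sensor's internal timer while tracking clock drift with a running period estimate; the class layout and smoothing factor are assumptions.

        class FifoTimestamper:
            def __init__(self, nominal_period_s, alpha=0.05):
                self.period = nominal_period_s   # running estimate of true period
                self.alpha = alpha               # smoothing factor for drift tracking
                self.last_sensor_t = None

            def reconstruct(self, sensor_t, n_samples):
                """Given the sensor timer value at the last sample of a FIFO
                batch and the batch size, return per-sample times."""
                if self.last_sensor_t is not None and n_samples > 0:
                    measured = (sensor_t - self.last_sensor_t) / n_samples
                    # Exponentially track the actual sampling period (drift).
                    self.period += self.alpha * (measured - self.period)
                self.last_sensor_t = sensor_t
                # Assign times by counting back from the batch-end timestamp.
                return [sensor_t - (n_samples - 1 - i) * self.period
                        for i in range(n_samples)]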

  4. A Mesoscale Total Dissolved Solids Quantity and Quality Study Integrating Responses of Multiple Biological Components in Small Stream Communities

    EPA Science Inventory

    A 42-day dosing test with ions comprising an excess TDS was run using mesocosms colonized with natural stream water fed continuously. In gridded gravel beds biota from microbes through macroinvertebrates are measured and interact in a manner realistic of stream riffle/run ecology...

  5. How Settings Change People: Applying Behavior Setting Theory to Consumer-Run Organizations

    ERIC Educational Resources Information Center

    Brown, Louis D.; Shepherd, Matthew D.; Wituk, Scott A.; Meissen, Greg

    2007-01-01

    Self-help initiatives stand as a classic context for organizational studies in community psychology. Behavior setting theory stands as a classic conception of organizations and the environment. This study explores both, applying behavior setting theory to consumer-run organizations (CROs). Analysis of multiple data sets from all CROs in Kansas…

  6. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...

  7. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on cropland and was originally designed to run simulations on a field-scale size. This study extended WEPS to run on multiple fields (grids) independently to cover a large region and to conduct an initial investigation to ass...

  8. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
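
    To make the flavor of such a declarative pattern language concrete, here is a toy Python sketch in which patterns are predicates over a window of timestamped events, composed with conjunction, disjunction, and negation; this mimics the style of combination described, not CERA's actual syntax.

        def event(name):
            # A point event is present somewhere in the window of (name, time) pairs.
            return lambda window: any(e[0] == name for e in window)

        def conj(*pats):   # all sub-patterns present in the window
            return lambda window: all(p(window) for p in pats)

        def disj(*pats):   # at least one sub-pattern present
            return lambda window: any(p(window) for p in pats)

        def neg(pat):      # sub-pattern absent
            return lambda window: not pat(window)

        # Example: alarm = high temperature AND (fan fault OR pump fault)
        # AND no operator override, over a sliding window of events.
        alarm = conj(event("temp_high"),
                     disj(event("fan_fault"), event("pump_fault")),
                     neg(event("override")))

        window = [("temp_high", 10.2), ("pump_fault", 10.4)]
        print(alarm(window))  # True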

  9. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  10. drPACS: A Simple UNIX Execution Pipeline

    NASA Astrophysics Data System (ADS)

    Teuben, P.

    2011-07-01

    We describe a very simple yet flexible and effective pipeliner for UNIX commands. It creates a Makefile to define a set of serially dependent commands. The commands in the pipeline share a common set of parameters by which they can communicate. Commands must follow a simple convention to retrieve and store parameters. Pipeline parameters can optionally be made persistent across multiple runs of the pipeline. Tools were added to simplify running a large series of pipelines, which can then also be run in parallel.
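
    The Makefile-based approach can be pictured with a short sketch: each stage depends on the previous stage's sentinel file and reads shared parameters from a common file. The stage names, sentinel convention, and parameter file below are illustrative, not drPACS's actual layout.

        # Illustrative stage names and file conventions, not drPACS's own.
        stages = ["calibrate", "reduce", "stack"]

        with open("Makefile", "w") as mk:
            mk.write("all: %s.done\n\n" % stages[-1])
            prev = ""
            for s in stages:
                dep = ("%s.done" % prev) if prev else ""
                mk.write("%s.done: %s\n" % (s, dep))
                # each stage reads/writes shared parameters in one file
                mk.write("\t./%s --params pipeline.params\n" % s)
                mk.write("\ttouch %s.done\n\n" % s)
                prev = s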

  11. Protective effect of Panax ginseng in cisplatin-induced cachexia in rats.

    PubMed

    Lobina, Carla; Carai, Mauro A M; Loi, Barbara; Gessa, Gian Luigi; Riva, Antonella; Cabri, Walter; Petrangolini, Giovanna; Morazzoni, Paolo; Colombo, Giancarlo

    2014-05-01

    This study investigated the protective effect of a standardized extract of Panax ginseng on multiple cisplatin-induced 'sickness behaviors' (a model of cancer-induced cachexia) in rats. Cisplatin was administered twice weekly (1-2 mg/kg, intraperitoneal) for 5 consecutive weeks. Panax ginseng extract (0, 25 and 50 mg/kg, intragastric) was administered daily over the 5-week period of cisplatin exposure. Malaise, bodyweight and temperature, pain sensitivity, and endurance running were recorded at baseline and at 5 weekly intervals. Treatment with cisplatin produced severe signs of malaise, marked loss of bodyweight, hypothermia, hyperalgesia and reduction in running time. Treatment with Panax ginseng extract completely prevented all cisplatin-induced alterations. These data indicate that treatment with Panax ginseng extract exerted a protective effect in a rat model of cachexia and suggest that Panax ginseng extract may be a promising therapeutic tool for supportive care in oncology.

  12. Gender difference and age-related changes in performance at the long-distance duathlon.

    PubMed

    Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver

    2013-02-01

    The differences in gender- and the age-related changes in triathlon (i.e., swimming, cycling, and running) performances have been previously investigated, but data are missing for the duathlon (i.e., running, cycling, and running). We investigated the participation and performance trends, the gender difference, and the age-related decline in performance at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men). Linear regression analyses for the 3 split times and the total event time demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5% for the 10-km run, 150-km cycle, 30-km run and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time period effect, ranging between 1.3% (first run) and 9.8% (bike split), with 3.3% for overall race time. In accordance with previous observations in triathlons, the age-related decline in duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the careers of long-distance duathletes, with the age of peak performance between 25 and 39 years for both women and men.

  13. Customisation of the exome data analysis pipeline using a combinatorial approach.

    PubMed

    Pattnaik, Swetansu; Vaidyanathan, Srividya; Pooja, Durgad G; Deepak, Sa; Panda, Binay

    2012-01-01

    The advent of next generation sequencing (NGS) technologies has revolutionised the way biologists produce, analyse and interpret data. Although NGS platforms provide a cost-effective way to discover genome-wide variants from a single experiment, variants discovered by NGS need follow-up validation due to the high error rates associated with various sequencing chemistries. Recently, whole exome sequencing has been proposed as an affordable option compared to whole genome runs, but it still requires follow-up validation of all the novel exomic variants. Customarily, a consensus approach is used to overcome the systematic errors inherent to the sequencing technology, alignment and post-alignment variant detection algorithms. However, this approach requires multiple sequencing chemistries, multiple alignment tools and multiple variant callers, which may not be viable in terms of time and money for individual investigators with limited informatics know-how. Biologists often lack the requisite training to deal with the huge amount of data produced by NGS runs and face difficulty in choosing from the list of freely available analytical tools for NGS data analysis. Hence, there is a need to customise the NGS data analysis pipeline to preferentially retain true variants by minimising the incidence of false positives, and to make the choice of the right analytical tools easier. To this end, we have sampled different freely available tools used at the alignment and post-alignment stages, suggesting the most suitable combination determined by a simple framework of pre-existing metrics to create significant datasets.

  14. Major and Minor League Baseball Hamstring Injuries: Epidemiologic Findings From the Major League Baseball Injury Surveillance System.

    PubMed

    Ahmad, Christopher S; Dick, Randall W; Snell, Edward; Kenney, Nick D; Curriero, Frank C; Pollack, Keshia; Albright, John P; Mandelbaum, Bert R

    2014-06-01

    Hamstring strains are a recognized cause of disability for athletes in many sports, but no study exists that reports the incidence and circumstances surrounding these injuries in professional baseball. We hypothesized that professional baseball players have a high incidence of hamstring strains and that these injuries are influenced by multiple factors, including history of hamstring injury, time period within the season, and the activity of base running. Study design: descriptive epidemiologic study. For the 2011 season, injury data were prospectively collected for every Major League Baseball (MLB) major and minor league team and recorded in the MLB's Injury Surveillance System. Data collected for this study included date of injury, activity in which the player was engaged at the time of injury, and time loss. Injury rates were reported in injuries per athlete-exposure (A-E), where athlete-exposures were defined as the average number of players on a team participating in a game multiplied by the number of games. In the major leagues, 50 hamstring strains were reported for an injury rate (IR) of 0.7 per 1000 A-Es, with an average of 24 days missed. In the minor leagues, 218 hamstring strains were reported for an IR of 0.7 per 1000 A-Es, with an average of 27 days missed. Base running, specifically running to first base, was the top activity for sustaining a hamstring strain in both the major and minor leagues, associated with almost two-thirds of hamstring strains. Approximately two-thirds of these injuries in both the major and minor leagues resulted in more than 7 days of time loss, and approximately 25% kept the player out for 1 month or longer. A history of a hamstring strain in the prior year, 2010, was found in 20% of the major league players and 8% of the minor league players. In the major leagues, the month of May had a significantly higher frequency of hamstring injuries than any other month of the season (P = .0153). Hamstring strains are a considerable cause of disability in professional baseball and are affected by history of hamstring strain, seasonal timing, and running to first base. © 2014 The Author(s).
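
    As a worked example of the athlete-exposure rate defined above (the 50 reported major league strains are from the abstract; the exposure figures are invented for illustration):

        # The 50 strains are from the abstract; exposure figures are invented.
        injuries = 50
        avg_players_per_game = 13.0   # assumed players participating per team-game
        team_games = 5500             # assumed team-games in the season
        athlete_exposures = avg_players_per_game * team_games
        rate_per_1000 = 1000.0 * injuries / athlete_exposures
        print(round(rate_per_1000, 1))  # about 0.7 injuries per 1000 A-Es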

  15. Fat max as an index of aerobic exercise performance in mice during uphill running

    PubMed Central

    Taniguchi, Hirokazu

    2018-01-01

    Endurance exercise performance has been used as a representative index in experimental animal models in the fields of health science, exercise physiology, comparative physiology, food function and nutritional physiology. The objective of the present study was to evaluate the effectiveness of Fatmax (the exercise intensity that elicits maximal fat oxidation) as an additional index of endurance exercise performance that can be measured during running at submaximal exercise intensity in mice. We measured both Fatmax and Vo2 peak of trained ICR mice that exercised voluntarily for 8 weeks and compared them with a sedentary group of mice at treadmill inclinations of 20, 30, 40, and 50°. The Vo2 at Fatmax of the training group was significantly higher than that of the sedentary group at inclinations of 30 and 40° (P < 0.001). The running speed at Fatmax of the training group was significantly higher than that of the sedentary group at inclinations of 20, 30, and 40° (P < 0.05). Blood lactate levels rose sharply in the sedentary group (7.33 ± 2.58 mM) compared with the training group (3.13 ± 1.00 mM, P < 0.01) when running speeds exceeded the Fatmax of sedentary mice. Vo2 at Fatmax correlated significantly with Vo2 peak, running time to fatigue, and lactic acid level during running (P < 0.05), although the reproducibility of Vo2 peak was higher than that of Vo2 at Fatmax. In conclusion, Fatmax can be used as a functional assessment of the endurance exercise performance of mice at submaximal exercise intensity. PMID:29474428

  16. Whole blood coagulation and platelet activation in the athlete: a comparison of marathon, triathlon and long distance cycling.

    PubMed

    Hanke, Alexander A; Staib, A; Görlinger, K; Perrey, M; Dirkmann, D; Kienbaum, P

    2010-02-26

    Serious thromboembolic events occur in otherwise healthy marathon athletes during competition. We tested the hypothesis that during heavy endurance sports, coagulation and platelets are activated depending on the type of endurance sport with respect to its running fraction. 68 healthy athletes participating in marathon (MAR, running 42 km, n = 24), triathlon (TRI, swimming 2.5 km + cycling 90 km + running 21 km, n = 22), and long distance cycling (CYC, 151 km, n = 22) were included in the study. Blood samples were taken before and immediately after completion of competition to perform rotational thrombelastometry. We assessed coagulation time (CT) and maximum clot firmness (MCF) after intrinsic activation, as well as fibrin polymerization (FIBTEM). Furthermore, platelet aggregation was tested after activation with ADP and thrombin activating peptide 6 (TRAP) using a multiple platelet function analyzer. Complete data sets were obtained in 58 athletes (MAR: n = 20, TRI: n = 19, CYC: n = 19). CT significantly decreased in all groups (MAR -9.9%, TRI -8.3%, CYC -7.4%) without differences between groups. In parallel, MCF (MAR +7.4%, TRI +6.1%, CYC +8.3%) and fibrin polymerization (MAR +14.7%, TRI +6.1%, CYC +8.3%) were significantly increased in all groups. However, platelets were only activated during MAR and TRI, as indicated by increased AUC during TRAP activation (MAR +15.8%) and increased AUC during ADP activation in MAR (+50.3%) and TRI (+57.5%). While coagulation is activated during physical activity irrespective of type, we observed significant platelet activation only during the marathon and, to a lesser extent, during the triathlon. We speculate that prolonged running may increase platelet activity, possibly due to mechanical alteration. Thus, prolonged running in particular may increase the risk of thromboembolic incidents in running athletes.

  17. Improving overlay manufacturing metrics through application of feedforward mask-bias

    NASA Astrophysics Data System (ADS)

    Joubert, Etienne; Pellegrini, Joseph C.; Misra, Manish; Sturtevant, John L.; Bernhard, John M.; Ong, Phu; Crawshaw, Nathan K.; Puchalski, Vern

    2003-06-01

    Traditional run-to-run controllers that rely on highly correlated historical events to forecast process corrections have been shown to provide substantial benefit over manual control in the case of a fab that is primarily manufacturing high-volume, frequently running parts (i.e., DRAM, MPU, and similar operations). However, a limitation of the traditional controller emerges when it is applied to a fab whose work in process (WIP) is composed primarily of short-running, high part count products (typical of foundries and ASIC fabs). This limitation exists because there is a strong likelihood that each reticle has a unique set of process corrections different from other reticles at the same process layer. Further limitations arise because each reticle is loaded and aligned differently on multiple exposure tools. A structural change in how the run-to-run controller manages the frequent reticle changes associated with the high part count environment has allowed breakthrough performance to be achieved. This breakthrough was made possible by the realization that (1) reticle-sourced errors are highly stable over long periods of time, allowing them to be deconvolved from the day-to-day tool and process drifts; and (2) reticle-sourced errors can be modeled as a feedforward disturbance rather than as discriminators in defining and dividing process streams. In this paper, we show how to deconvolve the static (reticle) and dynamic (day-to-day tool and process) components from the overall error vector to better forecast feedback for existing products, as well as how to compute or learn these values for new product introductions or new tool startups. Manufacturing data will be presented to support this discussion, with some real-world success stories.

  18. Ethanol consumption in mice: relationships with circadian period and entrainment.

    PubMed

    Trujillo, Jennifer L; Do, David T; Grahame, Nicholas J; Roberts, Amanda J; Gorman, Michael R

    2011-03-01

    A functional connection between the circadian timing system and alcohol consumption is suggested by multiple lines of converging evidence. Ethanol consumption perturbs physiological rhythms in hormone secretion, sleep, and body temperature; and conversely, genetic and environmental perturbations of the circadian system can alter alcohol intake. A fundamental property of the circadian pacemaker, the endogenous period of its cycle under free-running conditions, was previously shown to differ between selectively bred high- (HAP) and low- (LAP) alcohol preferring replicate 1 mice. To test whether there is a causal relationship between circadian period and ethanol intake, we induced experimental, rather than genetic, variations in free-running period. Male inbred C57Bl/6J mice and replicate 2 male and female HAP2 and LAP2 mice were entrained to light:dark cycles of 26 or 22 h or remained in a standard 24 h cycle. On discontinuation of the light:dark cycle, experimental animals exhibited longer and shorter free-running periods, respectively. Despite robust effects on circadian period and clear circadian rhythms in drinking, these manipulations failed to alter the daily ethanol intake of the inbred strain or selected lines. Likewise, driving the circadian system at long and short periods produced no change in alcohol intake. In contrast with replicate 1 HAP and LAP lines, there was no difference in free-running period between ethanol naïve HAP2 and LAP2 mice. HAP2 mice, however, were significantly more active than LAP2 mice as measured by general home-cage movement and wheel running, a motivated behavior implicating a selection effect on reward systems. Despite a marked circadian regulation of drinking behavior, the free-running and entrained period of the circadian clock does not determine daily ethanol intake. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Whole blood coagulation and platelet activation in the athlete: A comparison of marathon, triathlon and long distance cycling

    PubMed Central

    2010-01-01

    Introduction Serious thromboembolic events occur in otherwise healthy marathon athletes during competition. We tested the hypothesis that during heavy endurance sports, coagulation and platelets are activated depending on the type of endurance sport with respect to its running fraction. Materials and Methods 68 healthy athletes participating in marathon (MAR, running 42 km, n = 24), triathlon (TRI, swimming 2.5 km + cycling 90 km + running 21 km, n = 22), and long distance cycling (CYC, 151 km, n = 22) were included in the study. Blood samples were taken before and immediately after completion of competition to perform rotational thrombelastometry. We assessed coagulation time (CT) and maximum clot firmness (MCF) after intrinsic activation, as well as fibrin polymerization (FIBTEM). Furthermore, platelet aggregation was tested after activation with ADP and thrombin activating peptide 6 (TRAP) using a multiple platelet function analyzer. Results Complete data sets were obtained in 58 athletes (MAR: n = 20, TRI: n = 19, CYC: n = 19). CT significantly decreased in all groups (MAR -9.9%, TRI -8.3%, CYC -7.4%) without differences between groups. In parallel, MCF (MAR +7.4%, TRI +6.1%, CYC +8.3%) and fibrin polymerization (MAR +14.7%, TRI +6.1%, CYC +8.3%) were significantly increased in all groups. However, platelets were only activated during MAR and TRI, as indicated by increased AUC during TRAP activation (MAR +15.8%) and increased AUC during ADP activation in MAR (+50.3%) and TRI (+57.5%). Discussion While coagulation is activated during physical activity irrespective of type, we observed significant platelet activation only during the marathon and, to a lesser extent, during the triathlon. We speculate that prolonged running may increase platelet activity, possibly due to mechanical alteration. Thus, prolonged running in particular may increase the risk of thromboembolic incidents in running athletes. PMID:20452885

  20. Ethanol consumption in mice: relationships with circadian period and entrainment

    PubMed Central

    Trujillo, Jennifer L.; Do, David T.; Grahame, Nicholas J.; Roberts, Amanda J.; Gorman, Michael R.

    2011-01-01

    A functional connection between the circadian timing system and alcohol consumption is suggested by multiple lines of converging evidence. Ethanol consumption perturbs physiological rhythms in hormone secretion, sleep and body temperature, and conversely, genetic and environmental perturbations of the circadian system can alter alcohol intake. A fundamental property of the circadian pacemaker, the endogenous period of its cycle under free-running conditions, was previously shown to differ between selectively bred High- (HAP) and Low- (LAP) Alcohol Preferring replicate 1 mice. To test whether there is a causal relationship between circadian period and ethanol intake, we induced experimental, rather than genetic, variations in free-running period. Male inbred C57Bl/6J mice and replicate 2 male and female HAP2 and LAP2 mice were entrained to light:dark cycles of 26 h or 22 h or remained in a standard 24 h cycle. Upon discontinuation of the light:dark cycle, experimental animals exhibited longer and shorter free-running periods, respectively. Despite robust effects on circadian period and clear circadian rhythms in drinking, these manipulations failed to alter the daily ethanol intake of the inbred strain or selected lines. Likewise, driving the circadian system at long and short periods produced no change in alcohol intake. In contrast with replicate 1 HAP and LAP lines, there was no difference in free-running period between ethanol naïve HAP2 and LAP2 mice. HAP2 mice, however, were significantly more active than LAP2 mice as measured by general home-cage movement and wheel running, a motivated behavior implicating a selection effect on reward systems. Despite a marked circadian regulation of drinking behavior, the free-running and entrained period of the circadian clock does not determine daily ethanol intake. PMID:20880659

  1. Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch.

    PubMed

    Hoffmann, Thomas J

    2011-03-01

    It is often useful to rerun a command line R script with some slight change in the parameters used to run it: a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to easily pass multiple command line options, including vectors of values in the usual R format, into R. The same script can be set up to run things in parallel via different command line arguments. The R package batch also simplifies this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or a local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally, it provides a means to aggregate the results of the multiple processes run on a cluster.
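
    The package itself is written in and for R; as a language-neutral sketch of the same pattern (key=value command line arguments selecting parameters, with the same script spread across parallel processes), consider the following Python analogy.

        import subprocess
        import sys

        def parse_args(argv):
            """Turn ['seed=1', 'n=100'] into {'seed': '1', 'n': '100'}."""
            return dict(a.split("=", 1) for a in argv if "=" in a)

        if __name__ == "__main__":
            if sys.argv[1:]:
                params = parse_args(sys.argv[1:])
                print("running one simulation with", params)
            else:
                # Batch mode: spread parameter settings over processes.
                procs = [subprocess.Popen([sys.executable, __file__,
                                           "seed=%d" % s])
                         for s in range(4)]
                for p in procs:
                    p.wait()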

  2. A novel PMT test system based on waveform sampling

    NASA Astrophysics Data System (ADS)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, TDC and scaler, a test system based on waveform sampling was constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW was developed to run with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are compared in detail.
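
    A minimal sketch of the offline pulse analysis that such a waveform-sampling system enables (baseline subtraction, then charge and amplitude extraction); the units, baseline window, and pulse polarity are illustrative assumptions.

        import numpy as np

        def analyze_pulse(waveform, dt_ns=1.0, baseline_samples=50):
            """Charge (integral) and amplitude of a negative-going PMT pulse."""
            baseline = waveform[:baseline_samples].mean()
            pulse = baseline - waveform     # flip so the pulse is positive
            charge = pulse.sum() * dt_ns    # integral over the record
            amplitude = pulse.max()
            return charge, amplitude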

  3. Effects of an individual 12-week community-located "start-to-run" program on physical capacity, walking, fatigue, cognitive function, brain volumes, and structures in persons with multiple sclerosis.

    PubMed

    Feys, Peter; Moumdjian, Lousin; Van Halewyck, Florian; Wens, Inez; Eijnde, Bert O; Van Wijmeersch, Bart; Popescu, Veronica; Van Asch, Paul

    2017-11-01

    Exercise therapy studies in persons with multiple sclerosis (pwMS) have primarily focused on motor outcomes in the mid disease stage, while cognitive function and neural correlates have been addressed only to a limited extent. This pragmatic randomized controlled study investigated the effects of a remotely supervised, community-located "start-to-run" program on physical and cognitive function, fatigue, quality of life, brain volume, and connectivity. In all, 42 pwMS were randomized to either an experimental (EXP) or a waiting list control (WLC) group. The EXP group received individualized training instructions during 12 weeks (3×/week), to be performed in their community with the aim of participating in a running event. Measures covered physical function (VO2max, sit-to-stand test, Six-Minute Walk Test (6MWT), Multiple Sclerosis Walking Scale-12 (MSWS-12)), cognitive function (Rao's Brief Repeatable Battery (BRB), Paced Auditory Serial Addition Test (PASAT)), fatigue (Fatigue Scale for Motor and Cognitive Function (FSMC)), quality of life (Multiple Sclerosis Impact Scale-29 (MSIS-29)), and imaging. Brain volumes and diffusion tensor imaging (DTI) were quantified using FSL-SIENA/FIRST and FSL-TBSS. In all, 35 pwMS completed the trial. Interaction effects in favor of the EXP group were found for VO2max, the sit-to-stand test, MSWS-12, the Spatial Recall Test, FSMC, MSIS-29, and pallidum volume. VO2max improved by 1.5 mL/kg/min, MSWS-12 by 4 points, FSMC by 11 points, and MSIS-29 by 14 points. The Spatial Recall Test improved by more than 10%. Community-located run training improved aerobic capacity, functional mobility, visuospatial memory, fatigue, quality of life, and pallidum volume in pwMS.

  4. Mean platelet volume (MPV) predicts middle distance running performance.

    PubMed

    Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico

    2014-01-01

    Running economy and performance in middle distance running depend on several physiological factors, including anthropometric variables, functional characteristics, and training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared with pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV and reticulocyte hemoglobin concentration (RetCHR), and for post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable and age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only the MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running performance. The significant association between baseline MPV and running time suggests that hyperactive platelets may exert some pleiotropic effects on endurance performance.

  5. QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.

    PubMed

    Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei

    2014-01-01

    Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared its results with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require the use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
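
    The two-image trick used by WormLifespan (worms that moved between two time-lapse frames appear in the frame difference) can be sketched as follows; the thresholds and the connected-component step are illustrative, not QuantWorm's actual Java code.

        import numpy as np
        from scipy import ndimage

        def count_moving_worms(frame_a, frame_b, diff_thresh=30, min_pixels=20):
            """Count blobs that changed between two time-lapse frames."""
            diff = np.abs(frame_a.astype(int) - frame_b.astype(int)) > diff_thresh
            labeled, n = ndimage.label(diff)    # connected difference blobs
            if n == 0:
                return 0
            sizes = ndimage.sum(diff, labeled, range(1, n + 1))
            return int(np.sum(np.asarray(sizes) >= min_pixels))  # drop specks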

  6. Rover Attitude and Pointing System Simulation Testbed

    NASA Technical Reports Server (NTRS)

    Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam

    2009-01-01

    The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers, which was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards), and introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.

  7. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
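
    The core NRM idea, subdividing a parallelizable job into independent tasks and spreading them over many cores, can be sketched generically; here a local process pool stands in for the distributed cluster, and the task function is a placeholder.

        from concurrent.futures import ProcessPoolExecutor

        def process_task(chunk):
            # Placeholder for one independent task (e.g., tracing a block
            # of seismic rays); here it just does some arithmetic.
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            work = list(range(1_000_000))
            tasks = [work[i::8] for i in range(8)]   # 8 independent tasks
            with ProcessPoolExecutor() as pool:
                partials = list(pool.map(process_task, tasks))
            print(sum(partials))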

  8. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI

    PubMed Central

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert

    2016-01-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778

  9. The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI.

    PubMed

    Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R; Anagnostopoulos, Christoforos; Faisal, Aldo A; Montana, Giovanni; Leech, Robert

    2016-04-01

    Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
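
    A minimal sketch of the closed-loop idea, using the simpler of the two optimizers mentioned (a stochastic-approximation loop) on a synthetic stand-in for the real-time fMRI readout; the target function, step size, and decay factor are invented for illustration.

        import random

        def measure_response(x):
            # Synthetic stand-in for the real-time fMRI readout: the "brain
            # response" peaks at x = 0.7, plus measurement noise.
            return -(x - 0.7) ** 2 + random.gauss(0.0, 0.05)

        x, step = 0.5, 0.2
        for trial in range(50):
            up = measure_response(x + step)
            down = measure_response(x - step)
            x += step if up > down else -step  # move toward stronger response
            step *= 0.93                       # shrink steps as loop settles
        print("estimated optimal stimulus parameter:", round(x, 2))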

  10. A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Consider a special district (group) composed of multiple companies (agents), where each agent must meet an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem; the bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, to solve the problem faster, we proposed decomposing it into a set of single-period problems by distributing the CO2 emission allowance across periods, using a method we call the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to lower-bound group costs, and that the EP method reduces computational time without losing solution quality.
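
    A toy sketch of the decomposition idea: split the group CO2 allowance across periods (here proportionally to demand, as a simple stand-in for the EP method) so that each period can then be planned independently. All numbers are invented.

        # All numbers below are invented for illustration.
        demand = [120.0, 200.0, 160.0]   # group energy demand per period
        total_allowance = 90.0           # CO2 allowance over all periods

        # Stand-in for the EP method: give each period a share of the
        # allowance proportional to its demand, then plan periods separately.
        shares = [total_allowance * d / sum(demand) for d in demand]
        for t, (d, a) in enumerate(zip(demand, shares)):
            # solve_single_period(d, a) would be the per-period problem
            print("period %d: demand %.0f, CO2 allowance %.1f" % (t, d, a))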

  11. Theta phase precession of grid and place cell firing in open environments

    PubMed Central

    Jeewajee, A.; Barry, C.; Douchamps, V.; Manson, D.; Lever, C.; Burgess, N.

    2014-01-01

    Place and grid cells in the rodent hippocampal formation tend to fire spikes at successively earlier phases relative to the local field potential theta rhythm as the animal runs through the cell's firing field on a linear track. However, this ‘phase precession’ effect is less well characterized during foraging in two-dimensional open field environments. Here, we mapped runs through the firing fields onto a unit circle to pool data from multiple runs. We asked which of seven behavioural and physiological variables show the best circular–linear correlation with the theta phase of spikes from place cells in hippocampal area CA1 and from grid cells from superficial layers of medial entorhinal cortex. The best correlate was the distance to the firing field peak projected onto the animal's current running direction. This was significantly stronger than other correlates, such as instantaneous firing rate and time-in-field, but similar in strength to correlates with other measures of distance travelled through the firing field. Phase precession was stronger in place cells than grid cells overall, and robust phase precession was seen in traversals through firing field peripheries (although somewhat less than in traversals through the centre), consistent with phase coding of displacement along the current direction. This type of phase coding, of place field distance ahead of or behind the animal, may be useful for allowing calculation of goal directions during navigation. PMID:24366140
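
    The circular-linear correlation used above can be computed from three ordinary correlations (Mardia's formulation); a sketch, assuming phase is given in radians:

        import numpy as np

        def circ_lin_corr(phase, x):
            """Mardia's circular-linear correlation between a circular
            variable (phase, radians) and a linear variable x."""
            rxc = np.corrcoef(x, np.cos(phase))[0, 1]
            rxs = np.corrcoef(x, np.sin(phase))[0, 1]
            rcs = np.corrcoef(np.cos(phase), np.sin(phase))[0, 1]
            r2 = (rxc**2 + rxs**2 - 2.0 * rxc * rxs * rcs) / (1.0 - rcs**2)
            return np.sqrt(max(r2, 0.0))   # guard against tiny negatives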

  12. Correction of Microplate Data from High-Throughput Screening.

    PubMed

    Wang, Yuhong; Huang, Ruili

    2016-01-01

    High-throughput screening (HTS) makes it possible to collect cellular response data from a large number of cell lines and small molecules in a timely and cost-effective manner. The errors and noise in the microplate-formatted data from HTS have unique characteristics, and they can generally be grouped into three categories: run-wise (temporal, across multiple plates), plate-wise (background pattern, single plate), and well-wise (single well). In this chapter, we describe a systematic solution for identifying and correcting such errors and noise, based mainly on pattern recognition and digital signal processing technologies.
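
    As one generic example of plate-wise background correction in the spirit described (not the authors' exact method), a median-polish pass removes smooth row and column trends from a plate of readings:

        import numpy as np

        def median_polish(plate, n_iter=5):
            """Residuals after removing row and column background effects."""
            r = plate.astype(float).copy()
            for _ in range(n_iter):
                r -= np.median(r, axis=1, keepdims=True)  # row background
                r -= np.median(r, axis=0, keepdims=True)  # column background
            return r

        # Example: a 384-well plate (16 x 24) with a left-to-right drift.
        plate = np.random.rand(16, 24) + np.linspace(0.0, 1.0, 24)
        corrected = median_polish(plate)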

  13. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps

    PubMed Central

    Md. Sabri, Aznul Qalid; Loo, Chu Kiong; Mansoor, Ali Mohammed

    2018-01-01

    This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM), a system that ensures continuous mapping and information preservation despite tracking failures due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario, the algorithm generates a new map at the time of tracking failure and later merges the maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared with the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. Initialization is twice as fast as in ORB-SLAM, and the retrieved map can contain up to 90 percent more information, depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM. PMID:29702697

  14. webMGR: an online tool for the multiple genome rearrangement problem.

    PubMed

    Lin, Chi Ho; Zhao, Hao; Lowcay, Sean Harry; Shahab, Atif; Bourque, Guillaume

    2010-02-01

    The algorithm MGR enables the reconstruction of rearrangement phylogenies based on gene or synteny block order in multiple genomes. Although MGR has been successfully applied to study the evolution of different sets of species, its utilization has been hampered by the prohibitive running time for some applications. In the current work, we have designed new heuristics that significantly speed up the tool without compromising its accuracy. Moreover, we have developed a web server (webMGR) that includes elaborate web output to facilitate navigation through the results. webMGR can be accessed via http://www.gis.a-star.edu.sg/~bourque. The source code of the improved standalone version of MGR is also freely available from the web site. Supplementary data are available at Bioinformatics online.

  15. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing an increasingly important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can orient itself to the fire's position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are given as well. The system ran well in testing; it has high reliability, low cost, and easy node expansion, giving it a bright prospect for application and popularization.
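
    A hedged sketch of color-based fire localization as described: threshold flame-like colors in HSV and take the centroid of the largest blob as the fire position. The HSV bounds are rough illustrative values, and OpenCV 4 is assumed.

        import cv2
        import numpy as np

        def fire_position(frame_bgr):
            """Centroid (x, y) of the largest flame-colored region, or None."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            lower = np.array((0, 120, 180), dtype=np.uint8)   # red-orange
            upper = np.array((35, 255, 255), dtype=np.uint8)  # ..to yellow
            mask = cv2.inRange(hsv, lower, upper)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            c = max(contours, key=cv2.contourArea)
            m = cv2.moments(c)
            if m["m00"] == 0:
                return None
            return (m["m10"] / m["m00"], m["m01"] / m["m00"])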

  16. HOPE: An On-Line Piloted Handling Qualities Experiment Data Book

    NASA Technical Reports Server (NTRS)

    Jackson, E. B.; Proffitt, Melissa S.

    2010-01-01

    A novel on-line database for capturing most of the information obtained during piloted handling qualities experiments (either flight or simulated) is described. The Hyperlinked Overview of Piloted Evaluations (HOPE) web application is based on an open-source, object-oriented, Web-based front end (Ruby on Rails) that can be used with a variety of back-end relational database engines. The hyperlinked, on-line data book approach allows an easily traversed way of looking at a variety of collected data, including pilot ratings, pilot information, vehicle and configuration characteristics, test maneuvers, and individual flight test cards and repeat runs. It allows for on-line retrieval of pilot comments, both audio and transcribed, as well as time history data retrieval and video playback. Pilot questionnaires are recorded, as are pilot biographies. Simple statistics are calculated for each selected group of pilot ratings, allowing multiple ways to aggregate the data set (by pilot, by task, or by vehicle configuration, for example). Any number of per-run or per-task metrics can be captured in the database, and the entire run metrics dataset can be downloaded as comma-separated text for further analysis off-line. It is expected that this tool will be made available upon request.

  17. High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically-based maximum lift criteria was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple input, single output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
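
    The surrogate-plus-optimizer pattern described above can be sketched compactly; a quadratic polynomial stands in for the neural network, and the mock lift samples stand in for CFD runs. All numbers are invented for illustration.

        import numpy as np

        # Mock "CFD" samples: lift versus flap deflection (invented numbers).
        deflection = np.linspace(10.0, 40.0, 12)
        lift = -0.004 * (deflection - 27.0) ** 2 + 2.1

        surrogate = np.poly1d(np.polyfit(deflection, lift, 2))  # cheap model

        # Gradient-based search on the surrogate (cheap compared with CFD).
        x = 15.0
        for _ in range(200):
            x += 5.0 * surrogate.deriv()(x)   # simple gradient ascent
        print("predicted optimal deflection:", round(x, 1))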

  18. Positional Role Differences in the Aerobic and Anaerobic Power of Elite Basketball Players.

    PubMed

    Pojskić, Haris; Šeparović, Vlatko; Užičanin, Edin; Muratović, Melika; Mačković, Samir

    2015-12-22

    The aim of the present study was to compare the aerobic and anaerobic power and capacity of elite male basketball players across different positional roles. Fifty-five healthy players were divided into three subsamples according to positional role: guards (n = 22), forwards (n = 19) and centers (n = 14). Three tests were applied to estimate aerobic and anaerobic power and capacity: the countermovement jump (CMJ), a multistage shuttle run test, and the Running-based Anaerobic Sprint Test (RAST). The obtained data were used to calculate the players' aerobic and anaerobic power and capacities. To determine possible differences between players in different positions on the court, one-way analysis of variance (ANOVA) with the Bonferroni post-hoc test for multiple comparisons was used. The results showed significant differences between the groups of players in eleven of sixteen measured variables. Guards and forwards exhibited greater aerobic and relative anaerobic power, allowing shorter recovery times and the ability to repeat high-intensity, basketball-specific activities. Centers presented greater absolute anaerobic power and capacities, permitting greater force production during discrete tasks. Coaches can use these data to create more individualized strength and conditioning programs for different positional roles.

  19. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatiotemporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatiotemporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.
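
    For readers unfamiliar with the base technique, the sketch below draws an ordinary (non-nested) parallel coordinates plot of parameter settings from runs at two resolutions; NPCP's nesting, superimposition and explicit encodings go well beyond this. Parameter names and values are invented.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(2)
      params = ["entrainment", "CAPE timescale", "trigger", "evap. efficiency"]
      low = rng.random((10, 4))     # 10 low-resolution runs (invented)
      high = rng.random((6, 4))     # 6 high-resolution runs (invented)

      def normalize(a):             # put every axis on a common [0, 1] scale
          return (a - a.min(axis=0)) / (np.ptp(a, axis=0) + 1e-12)

      fig, ax = plt.subplots()
      for runs, color, label in [(low, "tab:blue", "low resolution"),
                                 (high, "tab:red", "high resolution")]:
          for i, row in enumerate(normalize(runs)):
              ax.plot(range(len(params)), row, color=color, alpha=0.5,
                      label=label if i == 0 else None)   # one legend entry each
      ax.set_xticks(range(len(params)))
      ax.set_xticklabels(params)
      ax.legend()
      plt.show()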

  20. Implementing Audio Digital Feedback Loop Using the National Instruments RIO System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, G.; Byrd, J. M.

    2006-11-20

    Development of systems for high-precision RF distribution and laser synchronization at Berkeley Lab has been ongoing for several years. Successful operation of these systems requires multiple audio-bandwidth feedback loops running at relatively high gains. Stable operation of the feedback loops requires careful design of the feedback transfer function. To allow for a flexible and compact implementation, we have developed digital feedback loops on the National Instruments Reconfigurable Input/Output (RIO) platform. This platform uses an FPGA and multiple I/Os and can provide eight parallel channels running different filters. We present the design and preliminary experimental results of this system.
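
    The core of such a channel is a discrete-time filter closing a loop around a measured error signal. A minimal sketch follows; all gains, the drift model and the noise level are invented, and a simple PI filter stands in for whatever transfer function the FPGA actually implements.

      import numpy as np

      rng = np.random.default_rng(3)
      kp, ki = 0.5, 0.05        # proportional and integral gains (a stable pair)
      drift = 1e-3              # per-sample drift the loop must track out
      error, integ = 1.0, 0.0   # initial loop error and integrator state

      for _ in range(500):
          meas = error + rng.normal(0.0, 1e-4)   # noisy error measurement
          integ += meas                          # discrete-time integrator
          correction = kp * meas + ki * integ    # PI transfer function
          error += drift - correction           # plant update with feedback

      print(f"residual error after 500 samples: {error:.2e}")

    With this gain pair the closed-loop poles sit inside the unit circle, so the residual settles near zero; pushing the gains too high moves the poles out and the loop oscillates, which is the stability concern behind careful transfer-function design.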

  1. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search. It aims to find a target block (the block containing target items) rather than a target item itself, and it runs a little faster than full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.
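
    For context, the baseline being undercut is full Grover search, whose query count an amplitude-level classical simulation can reproduce in a few lines. The sketch below simulates plain Grover search only (the block structure and the partial-search global/local iterations are omitted); N and M are arbitrary choices.

      import numpy as np

      N, M = 1024, 4                          # database size, number of targets
      is_target = np.zeros(N, dtype=bool)
      is_target[:M] = True

      psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition
      queries = round(np.pi / 4 * np.sqrt(N / M))
      for _ in range(queries):
          psi[is_target] *= -1                # oracle call: flip target phases
          psi = 2 * psi.mean() - psi          # diffusion: invert about the mean

      print(queries, "oracle queries; success probability =",
            round(float((psi[is_target] ** 2).sum()), 4))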

  2. Steady state preparative multiple dual mode counter-current chromatography: Productivity and selectivity. Theory and experimental verification.

    PubMed

    Kostanyan, Artak E; Erastov, Andrey A

    2015-08-07

    In steady-state (SS) multiple dual mode (MDM) counter-current chromatography (CCC), at the beginning of the first step of every cycle the sample, dissolved in one of the phases, is continuously fed into a CCC device over a constant time not exceeding the run time of the first step. After a certain number of cycles, the steady-state regime is achieved: concentrations still vary over time during each cycle, but the concentration profiles of solutes eluted with both phases remain constant across all subsequent cycles. The objective of this work was to develop analytical expressions to describe the SS MDM CCC separation processes, which can help in simulating and designing these processes and in selecting a suitable compromise between productivity and selectivity in preparative and production CCC separations. Experiments carried out using model mixtures of compounds from the GUESSmix with the solvent system hexane/ethyl acetate/methanol/water demonstrated reasonable agreement between the predictions of the theory and the experimental results. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. The neural correlates of risky decision making across short and long runs

    PubMed Central

    Rao, Li-Lin; Dunn, John C.; Zhou, Yuan; Li, Shu

    2015-01-01

    People frequently change their preferences between gambles they play once and gambles they play multiple times. In general, preferences for repeated-play gambles are more consistent with the expected values of the options. According to the one-process view, the change in preference is due to a change in the structure of the gamble that is relevant to decision making. According to the two-process view, the change is attributable to a shift in the decision-making strategy that is used. To adjudicate between these two theories, we asked participants to choose between gambles played once or 100 times, and also to choose between them based on their expected value. Consistent with the two-process theory, we found a set of brain regions that were sensitive to the extent of behavioral change between single and aggregated play and that also showed significant (de)activation in the expected-value choice task. These results support the view that people change their decision-making strategies for risky choices considered once or multiple times. PMID:26516095

  4. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When the objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by avoiding the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters, the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural networks that solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
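
    The key idea, stripped to a toy: a genotype is a small program with an input parameter, and fitness evaluates the same genotype at several parameter values, so evolution favors individuals whose parameter genuinely controls the design. The "design" below (leg lengths for a table of height h) is a deliberately trivial stand-in for the paper's generative encodings.

      import random

      def make_individual():
          # genotype: coefficients (a, b) of the rule legs = a*h + b (invented)
          return [random.uniform(0.0, 2.0), random.uniform(-1.0, 1.0)]

      def develop(genome, h):
          a, b = genome
          return [a * h + b] * 4       # phenotype: four legs for a height-h table

      def fitness(genome):
          # evaluate one individual at several parameter values, as the paper's
          # system does; reward designs whose legs match each requested height
          return -sum(abs(develop(genome, h)[0] - h) for h in (0.5, 1.0, 1.5))

      population = [make_individual() for _ in range(50)]
      for _ in range(100):
          population.sort(key=fitness, reverse=True)
          parents = population[:10]    # truncation selection
          population = parents + [
              [g + random.gauss(0.0, 0.1) for g in random.choice(parents)]
              for _ in range(40)]
      print("best genome (a, b):", max(population, key=fitness))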

  5. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    PubMed

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests a decreased load on the lower limbs when the shoe cleat is placed more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short-distance triathlon.

  6. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

    Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
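
    PVM itself is long obsolete, but the coarse-grained pattern (farm independent engine-cycle cases out to workers, collect the results) maps directly onto a modern process pool. A minimal sketch, with an invented one-line "engine model" in place of NEPP:

      from multiprocessing import Pool

      def run_case(case):
          # placeholder for one NEPP engine-cycle analysis (invented formula)
          altitude, mach = case
          thrust = 50_000 * (1 - altitude / 60_000) * (1 + 0.3 * mach)
          return case, thrust

      if __name__ == "__main__":
          cases = [(alt, mach) for alt in range(0, 50_000, 10_000)
                               for mach in (0.6, 0.8, 0.9)]
          with Pool(processes=4) as pool:          # one worker per "machine"
              for case, thrust in pool.map(run_case, cases):
                  print(case, f"-> {thrust:,.0f} (arbitrary units)")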

  7. Design and implementation of a software package to control a network of robotic observatories

    NASA Astrophysics Data System (ADS)

    Tuparev, G.; Nicolova, I.; Zlatanov, B.; Mihova, D.; Popova, I.; Hessman, F. V.

    2006-09-01

    We present a description of a reusable software package able to control a large, heterogeneous network of fully and semi-robotic observatories, initially developed to run the MONET network of two 1.2 m telescopes. Special attention is given to the design of a robust, long-term observation scheduler which also allows the trading of observation time and facilities within various networks. The handling of the "Phase I&II" project-development process, the time accounting between complex organizational structures, and usability issues for making the package accessible not only to professional astronomers but also to amateurs and high-school students are discussed. A simple RTML-based solution to link multiple networks is demonstrated.

  8. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    USGS Publications Warehouse

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executor, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
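
    The queue/distribute/retrieve pattern is easy to demonstrate at toy scale: a manager listens on a TCP port and hands one queued run to each connecting worker. This is a sketch of the idea only; GENIE's actual wire protocol, port and message format are not described in the abstract, so everything below is invented.

      import json, queue, socket, threading, time

      HOST, PORT = "127.0.0.1", 5555
      runs = queue.Queue()
      for i in range(6):
          runs.put({"run_id": i, "parameters": [i * 0.1, i * 0.2]})

      def manager():
          # hand out one queued model run per incoming TCP connection
          with socket.create_server((HOST, PORT)) as srv:
              while not runs.empty():
                  conn, _ = srv.accept()
                  with conn:
                      conn.sendall(json.dumps(runs.get()).encode() + b"\n")

      threading.Thread(target=manager, daemon=True).start()
      time.sleep(0.2)                       # let the manager start listening

      for _ in range(6):                    # worker: fetch and "execute" runs
          with socket.create_connection((HOST, PORT)) as s:
              job = json.loads(s.makefile().readline())
          print("executed run", job["run_id"], "result:", sum(job["parameters"]))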

  9. Barefoot running claims and controversies: a review of the literature.

    PubMed

    Jenkins, David W; Cauthon, David J

    2011-01-01

    Barefoot running is slowly gaining a dedicated following. Proponents of barefoot running claim many benefits, such as improved performance and reduced injuries, whereas detractors warn of the imminent risks involved. Multiple publications were reviewed using key words. A review of the literature uncovered many studies that have looked at the barefoot condition and found notable differences in gait and other parameters. These findings, along with much anecdotal information, can lead one to extrapolate that barefoot runners should have fewer injuries, better performance, or both. Several athletic shoe companies have designed running shoes that attempt to mimic the barefoot condition and, thus, garner the purported benefits of barefoot running. Although there is no evidence that either confirms or refutes improved performance and reduced injuries in barefoot runners, many of the claimed disadvantages to barefoot running are not supported by the literature. Nonetheless, it seems that barefoot running may be an acceptable training method for athletes and coaches who understand and can minimize the risks.

  10. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process, in which changes of direction correspond to a switch in the flagellar motor rotation. The run-time distribution is classically described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic distribution of run times is not exponential but a heavy-tailed power-law decay, which is at odds with the classical motility picture. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model in which bacterial dwelling times on the surfaces are related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run-time distribution is considered. The model fails to reproduce the qualitative dynamics, however, when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run-time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
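
    The macroscopic consequence of the tail is easy to see in simulation: power-law run times produce far wider displacement spreads than exponential ones, consistent with long contamination tongues. A 1-D toy model, with all parameters invented:

      import numpy as np

      rng = np.random.default_rng(4)

      def run_and_tumble(sample_run_time, n_cells=1000, t_max=200.0, speed=1.0):
          """1-D run-and-tumble: each tumble picks a fresh random direction."""
          x = np.zeros(n_cells)
          for i in range(n_cells):
              t = 0.0
              while t < t_max:
                  tau = min(sample_run_time(), t_max - t)   # one run
                  direction = 1.0 if rng.random() < 0.5 else -1.0
                  x[i] += speed * tau * direction
                  t += tau
          return x

      samplers = {
          "exponential": lambda: rng.exponential(1.0),      # classical model
          "power law": lambda: 0.1 + rng.pareto(1.2),       # heavy tail
      }
      for name, sampler in samplers.items():
          spread = np.std(run_and_tumble(sampler))
          print(f"{name:12s} displacement spread: {spread:8.2f}")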

  11. Preventing Run-Time Bugs at Compile-Time Using Advanced C++

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, Richard

    When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.

  12. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelized, and to fully utilize them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly, software is available via open-source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund its development, to gain credit for the effort, IP, time and dollars spent, and will facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate but connected components: Register, Review, Reference, Run, and Repeat.
    1) The Register component will facilitate discovery of relevant software from multiple open-source code repositories. Registration of a code should include information about licensing and the hardware environments it can be run on, define appropriate validation (testing) procedures, and list the critical dependencies.
    2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer-review forums such as Mozilla Science or appropriate journals (e.g., Geoscientific Model Development) to help users know which codes to trust.
    3) The Reference component will be accomplished by linking the software framework to services such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code.
    4) The Run component will draw on the information supplied at registration, the benchmark cases described in the review, and other relevant information to instantiate the scientific code on the selected environment.
    5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of the software, including identification of all input and output artefacts and all elements and transactions within that workflow.
    The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.

  13. Relationship of physical activity to fundamental movement skills among adolescents.

    PubMed

    Okely, A D; Booth, M L; Patterson, J W

    2001-11-01

    To determine the relationship of participation in organized and nonorganized physical activity with fundamental movement skills among adolescents, male and female children in Grade 8 (mean age, 13.3 yr) and Grade 10 (mean age, 15.3 yr) were assessed on six fundamental movement skills (run, vertical jump, catch, overhand throw, forehand strike, and kick). Physical activity was assessed using a self-report recall measure in which students reported the type, duration, and frequency of participation in organized and nonorganized physical activity during a usual week. Multiple regression analysis indicated that fundamental movement skills significantly predicted time in organized physical activity, although the percentage of variance explained was small. The prediction was stronger for girls than for boys. Multiple regression analysis showed no relationship between time in nonorganized physical activity and fundamental movement skills. Fundamental movement skills are significantly associated with adolescents' participation in organized physical activity, but predict only a small portion of it.

  14. Optimal chemotaxis in intermittent migration of animal cells

    NASA Astrophysics Data System (ADS)

    Romanczuk, P.; Salbreux, G.

    2015-04-01

    Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases and performing persistent migration during run phases. We show that the chaser's average run time can be adjusted to minimize either the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time, as well as scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.

  15. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
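
    The recursion that makes the Kalman filter attractive here is that each new scan updates the estimate in constant time. A scalar sketch (one voxel, random-walk baseline model; all noise levels invented) shows the predict/update cycle the paper exploits:

      import numpy as np

      rng = np.random.default_rng(5)
      n = 300
      truth = 100.0 + np.cumsum(rng.normal(0.0, 0.05, n))  # drifting baseline
      scans = truth + rng.normal(0.0, 1.0, n)              # noisy measurements

      q, r = 0.05 ** 2, 1.0 ** 2   # process and measurement noise variances
      x, p = scans[0], 1.0         # state estimate and its variance
      for z in scans:
          p += q                   # predict: baseline follows a random walk
          k = p / (p + r)          # Kalman gain
          x += k * (z - x)         # update with the newly arrived scan
          p *= 1 - k

      print(f"final estimate error: {abs(x - truth[-1]):.3f}")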

  16. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.

  17. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient, which thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing it with the existing methods.
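
    For reference, plain weighted round robin (the baseline being improved on) simply repeats each VM in the dispatch rotation in proportion to a capability weight; the paper's improvement additionally weighs task length and interdependency. A minimal sketch with invented weights:

      from itertools import cycle

      vms = {"vm1": 4, "vm2": 2, "vm3": 1}    # hypothetical capability weights
      rotation = cycle([vm for vm, w in vms.items() for _ in range(w)])

      assignment = {vm: [] for vm in vms}
      for task in (f"task{i}" for i in range(14)):
          assignment[next(rotation)].append(task)   # dispatch in weighted order
      for vm, tasks in assignment.items():
          print(vm, tasks)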

  18. Short-term scheduling of an open-pit mine with multiple objectives

    NASA Astrophysics Data System (ADS)

    Blom, Michelle; Pearce, Adrian R.; Stuckey, Peter J.

    2017-05-01

    This article presents a novel algorithm for the generation of multiple short-term production schedules for an open-pit mine, in which several objectives, of varying priority, characterize the quality of each solution. A short-term schedule selects regions of a mine site, known as 'blocks', to be extracted in each week of a planning horizon (typically spanning 13 weeks). Existing tools for constructing these schedules use greedy heuristics, with little optimization. To construct a single schedule in which infrastructure is sufficiently utilized, with production grades consistently close to a desired target, a planner must often run these heuristics many times, adjusting parameters after each iteration. A planner's intuition and experience can evaluate the relative quality and mineability of different schedules in a way that is difficult to automate. Of interest to a short-term planner is the generation of multiple schedules, extracting available ore and waste in varying sequences, which can then be manually compared. This article presents a tool in which multiple, diverse, short-term schedules are constructed, meeting a range of common objectives without the need for iterative parameter adjustment.

  19. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint, and hence tasks have to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks, and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient, which thus improves user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing it with the existing methods. PMID:26955656

  20. Impacts of conservation and human development policy across stakeholders and scales

    PubMed Central

    Li, Cong; Zheng, Hua; Li, Shuzhuo; Chen, Xiaoshu; Li, Jie; Zeng, Weihong; Liang, Yicheng; Polasky, Stephen; Feldman, Marcus W.; Ruckelshaus, Mary; Ouyang, Zhiyun; Daily, Gretchen C.

    2015-01-01

    Ideally, both ecosystem service and human development policies should improve human well-being through the conservation of ecosystems that provide valuable services. However, program costs and benefits to multiple stakeholders, and how they change through time, are rarely carefully analyzed. We examine one of China's new ecosystem service protection and human development policies: the Relocation and Settlement Program of Southern Shaanxi Province (RSP), which pays households that opt voluntarily to resettle from mountainous areas. The RSP aims to reduce disaster risk, restore important ecosystem services, and improve human well-being. We use household surveys and biophysical data in an integrated economic cost–benefit analysis for multiple stakeholders. We project that the RSP will result in positive net benefits to the municipal government and to cross-region and global beneficiaries over the long run, along with environmental improvements including improved water quality, soil erosion control, and carbon sequestration. However, there are significant short-run relocation costs for local residents, so poor households may have difficulty participating because they lack the resources to pay the initial costs of relocation. Greater subsidies and subsequent support after relocation are necessary to reduce the payback period for resettled households in the long run. Compensation from downstream beneficiaries for improved water and from carbon trades could be channeled into reducing relocation costs for the poor and sharing the burden of RSP implementation. The effectiveness of the RSP could also be greatly strengthened by early investment in developing human capital and environment-friendly jobs and by establishing long-term mechanisms for securing program goals. These challenges and potential solutions pervade ecosystem service efforts globally. PMID:26082546

  1. Competitive market for multiple firms and economic crisis

    NASA Astrophysics Data System (ADS)

    Tao, Yong

    2010-09-01

    The origin of economic crises is a key problem for economics. We present a model of long-run competitive markets to show that, over a long time scale, the multiplicity of behaviors in an economic system emerges as statistical regularities (perfectly competitive markets obey Bose-Einstein statistics and purely monopolistic-competitive markets obey Boltzmann statistics) and to show how interaction among firms influences the evolution of competitive markets. It has been widely accepted that perfect competition is most efficient. Our study shows that the perfectly competitive system, as an extreme case of competitive markets, is most efficient but not stable, and gives rise to economic crises as society reaches full employment. In the economic crisis revealed by our model, many firms condense (collapse) into the lowest supply level (zero supply, namely, bankruptcy status), in analogy to Bose-Einstein condensation. This curious phenomenon arises because perfect competition (homogeneous competition) implies symmetric (indistinguishable) investment directions, a situation abhorred by nature. Therefore, we urge the promotion of monopolistic competition (heterogeneous competition) rather than perfect competition. To provide early warning of economic crises, we introduce a resolving index of investment, which approaches zero in the run-up to an economic crisis. On the other hand, our model discloses, as a profound conclusion, that the technological level of a long-run social or economic system is proportional to the freedom (disorder) of the system; in other words, technology equals the entropy of the system. As an application of this concept, we give a possible answer to the Needham question: “Why was it that despite the immense achievements of traditional China it had been in Europe and not in China that the scientific and industrial revolutions occurred?”

  2. Competitive market for multiple firms and economic crisis.

    PubMed

    Tao, Yong

    2010-09-01

    The origin of economic crises is a key problem for economics. We present a model of long-run competitive markets to show that, over a long time scale, the multiplicity of behaviors in an economic system emerges as statistical regularities (perfectly competitive markets obey Bose-Einstein statistics and purely monopolistic-competitive markets obey Boltzmann statistics) and to show how interaction among firms influences the evolution of competitive markets. It has been widely accepted that perfect competition is most efficient. Our study shows that the perfectly competitive system, as an extreme case of competitive markets, is most efficient but not stable, and gives rise to economic crises as society reaches full employment. In the economic crisis revealed by our model, many firms condense (collapse) into the lowest supply level (zero supply, namely, bankruptcy status), in analogy to Bose-Einstein condensation. This curious phenomenon arises because perfect competition (homogeneous competition) implies symmetric (indistinguishable) investment directions, a situation abhorred by nature. Therefore, we urge the promotion of monopolistic competition (heterogeneous competition) rather than perfect competition. To provide early warning of economic crises, we introduce a resolving index of investment, which approaches zero in the run-up to an economic crisis. On the other hand, our model discloses, as a profound conclusion, that the technological level of a long-run social or economic system is proportional to the freedom (disorder) of the system; in other words, technology equals the entropy of the system. As an application of this concept, we give a possible answer to the Needham question: "Why was it that despite the immense achievements of traditional China it had been in Europe and not in China that the scientific and industrial revolutions occurred?"

  3. Barefoot versus shoe running: from the past to the present.

    PubMed

    Kaplan, Yonatan

    2014-02-01

    Barefoot running is not a new concept, but relatively few people choose to engage in barefoot running on a regular basis. Despite the technological developments in modern running footwear, as many as 79% of runners are injured every year. Although benefits of barefoot running have been proposed, there are also potential risks associated with it. The objective of this review was to survey the evidence-based literature concerning barefoot/minimal-footwear running and the implications for the practicing physician. Multiple publications were reviewed using an electronic search of databases including Medline, Cinahl, Embase, PubMed, and the Cochrane Database from inception until August 30, 2013, using the search terms barefoot running, barefoot running biomechanics, and shoe vs. barefoot running. Ninety-six relevant articles were found; most were reviews of biomechanical and kinematic studies. There are notable differences in gait and other parameters between barefoot running and shoe running. Based on these findings and much anecdotal evidence, one could conclude that barefoot runners should have fewer injuries, better performance, or both. Several athletic shoe companies have designed running shoes that attempt to mimic the barefoot condition and thus garner the purported benefits of barefoot running. Although there is no evidence that confirms or refutes improved performance and reduced injuries in barefoot runners, many of the claimed disadvantages to barefoot running are not supported by the literature. Nonetheless, it seems that barefoot running may be an acceptable training method for athletes and coaches, as it may minimize the risks of injury.

  4. The Error Reporting in the ATLAS TDAQ System

    NASA Astrophysics Data System (ADS)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where they can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware that can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each exploits advanced features of the given language to simplify end-user programming. For example, since C++ offers no built-in way of declaring rich exception hierarchies concisely, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one of the available class constructors, and send this instance to ERS. This paper presents the original design solutions exploited in the ERS implementation and describes how it was used during the first ATLAS run period. The cross-system error-reporting standardization introduced by ERS was one of the key points for the successful implementation of automated mechanisms for online error recovery.

  5. Federated queries of clinical data repositories: the sum of the parts does not equal the whole

    PubMed Central

    Weber, Griffin M

    2013-01-01

    Background and objective: In 2008 we developed a shared health research information network (SHRINE), which for the first time enabled research queries across the full patient populations of four Boston hospitals. It uses a federated architecture, where each hospital returns only the aggregate count of the number of patients who match a query. This allows hospitals to retain control over their local databases and comply with federal and state privacy laws. However, because patients may receive care from multiple hospitals, the result of a federated query might differ from what the result would be if the query were run against a single central repository. This paper describes the situations when this happens and presents a technique for correcting these errors. Methods: We use a one-time process of identifying which patients have data in multiple repositories by comparing one-way hash values of patient demographics. This enables us to partition the local databases such that all patients within a given partition have data at the same subset of hospitals. Federated queries are then run separately on each partition independently, and the combined results are presented to the user. Results: Using theoretical bounds and simulated hospital networks, we demonstrate that once the partitions are made, SHRINE can produce more precise estimates of the number of patients matching a query. Conclusions: Uncertainty in the overlap of patient populations across hospitals limits the effectiveness of SHRINE and other federated query tools. Our technique reduces this uncertainty while retaining an aggregate federated architecture. PMID:23349080
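
    The overlap-detection step can be sketched with a salted one-way hash: each site hashes the same demographic fields, and matching digests reveal which patients have data at multiple sites without exchanging identities. The fields, salt, and helper below are invented for illustration; a production system would need stronger protections (e.g., a keyed HMAC) than this sketch shows.

      import hashlib

      def demographic_hash(name, dob, zip_code, salt="shared-network-salt"):
          key = f"{name.lower()}|{dob}|{zip_code}|{salt}".encode()
          return hashlib.sha256(key).hexdigest()      # one-way digest

      hospital_a = {demographic_hash("Ann Lee", "1970-02-01", "02115"),
                    demographic_hash("Bo Chen", "1985-07-12", "02116")}
      hospital_b = {demographic_hash("Ann Lee", "1970-02-01", "02115"),
                    demographic_hash("Cy Park", "1990-11-30", "02114")}

      shared = hospital_a & hospital_b    # patients with data at both sites
      # these patients form a partition whose queries must be answered jointly;
      # everyone else can be counted per hospital without double counting
      print(f"{len(shared)} patient(s) appear at both hospitals")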

  6. Reducing Overutilization of Testing for Clostridium difficile Infection in a Pediatric Hospital System: A Quality Improvement Initiative.

    PubMed

    Klatte, J Michael; Selvarangan, Rangaraj; Jackson, Mary Anne; Myers, Angela L

    2016-01-01

    Study objectives included addressing overuse of Clostridium difficile infection (CDI) laboratory testing by decreasing submission rates of nondiarrheal stool specimens and specimens from children ≤12 months of age, and determining the patient and laboratory cost savings associated with decreased testing. A multifaceted initiative was developed, with components including multiple provider education methods, computerized order entry modifications, and automatic declination by the laboratory of stool specimens of nondiarrheal consistency and specimens from children ≤12 months old. A run chart, showing the number of nondiarrheal plus infant stool specimens submitted over time, was developed to analyze the initiative's impact on clinicians' test-ordering practices. A p-chart was generated to evaluate the percentage of submitted specimens tested biweekly over a 12-month period. Cost savings for patients and the laboratory were assessed at the conclusion of the study period. Run chart analysis revealed an initial shift after the interventions, suggesting a temporary decrease in test submission; however, no sustained differences in the numbers of specimens submitted biweekly were observed over time. On the p-chart, the mean percentage of specimens tested before the intervention was 100%. After the intervention, the average percentage of specimens tested dropped to 53.8%. The resultant laboratory cost savings totaled nearly $3,600, and patient savings on testing charges were approximately $32,000. Automatic laboratory declination of nondiarrheal stools submitted for CDI testing resulted in a sustained decrease in the number of specimens tested, producing significant laboratory and patient cost savings. Despite multiple educational efforts, no sustained changes in physician ordering practices were observed. Copyright © 2016 by the American Academy of Pediatrics.

  7. Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.

    PubMed

    Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M

    2015-05-01

    Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study design: randomized crossover trial (level of evidence 3). Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Runners had more rearfoot strikes in traditional shoes (87%) than in minimalist shoes (67%) or socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the four measurements performed. With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.

  8. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    NASA Astrophysics Data System (ADS)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time digital signal processing (DSP) applications implemented on a multiprocessor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic preemption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic preemption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic preemptions compared with run-time methods such as deadline-monotonic scheduling.

  9. Evaluation and Testing of the ADVANTG Code on SNM Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Pacific Northwest National Laboratory (PNNL) has been tasked with evaluating the effectiveness of ORNL's new hybrid transport code, ADVANTG, on scenarios of interest to our NA-22 sponsor, specifically detection of diversion of special nuclear material (SNM). PNNL staff determined that acquisition and installation of ADVANTG were relatively straightforward for a code in its phase of development, but probably not yet sufficient for mass distribution to the general user. PNNL staff also determined that, with little effort, ADVANTG generated weight windows that typically worked for the problems and produced results consistent with MCNP. With the slightly greater effort of choosing a finer mesh around detectors or sample reaction tally regions, the figure of merit (FOM) could be further improved in most cases; this does take some limited knowledge of deterministic transport methods. The FOM could also be increased by limiting the energy range for a tally to the energy region of greatest interest. It was then found that an MCNP run with the full energy range for the tally showed improved statistics in the region used for the ADVANTG run. The specific case of interest chosen by the sponsor is the CIPN project from Los Alamos National Laboratory (LANL), an active-interrogation, non-destructive assay (NDA) technique to quantify the fissile content in a spent fuel assembly that is also sensitive to cases of material diversion. Unfortunately, weight windows for the CIPN problem cannot currently be properly generated with ADVANTG due to inadequate accommodations for source definition: ADVANTG requires that a fixed neutron source be defined within the problem and cannot account for neutron multiplication. As such, it is rendered useless in active interrogation scenarios. It is also interesting to note that this is a difficult problem to solve and that the automated weight-windows generator in MCNP actually slowed down the problem. PNNL has therefore determined that there is no effective tool available for speeding up MCNP on problems such as the CIPN scenario. With regard to the benchmark scenarios, ADVANTG performed very well for most of the difficult, long-running, standard radiation detection scenarios. Specifically, run-time speedups were observed for spatially large scenarios, or those having significant shielding or scattering geometries. ADVANTG performed on par with existing codes for moderately sized scenarios, or those with little to moderate shielding, or multiple paths to the detectors. ADVANTG ran slower than MCNP for very simple, spatially small cases with little to no shielding that run very quickly anyway. Lastly, ADVANTG could not solve problems that did not consist of fixed source-to-detector geometries; for example, it could not solve scenarios with multiple detectors or secondary particles, such as active interrogation, neutron-induced gamma, or fission neutrons.

  10. 16 CFR 803.10 - Running of time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Running of time. 803.10 Section 803.10 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 TRANSMITTAL RULES § 803.10 Running of time. (a...

  11. Age of child, more than HPV type, is associated with clinical course in recurrent respiratory papillomatosis.

    PubMed

    Buchinsky, Farrel J; Donfack, Joseph; Derkay, Craig S; Choi, Sukgi S; Conley, Stephen F; Myer, Charles M; McClay, John E; Campisi, Paolo; Wiatrak, Brian J; Sobol, Steven E; Schweinfurth, John M; Tsuji, Domingos H; Hu, Fen Z; Rockette, Howard E; Ehrlich, Garth D; Post, J Christopher

    2008-05-28

    Recurrent respiratory papillomatosis (RRP) is a devastating disease in which papillomas in the airway cause hoarseness and breathing difficulty. The disease is caused by human papillomavirus (HPV) 6 or 11 and is very variable. Patients undergo multiple surgeries to maintain a patent airway and to be able to communicate vocally. Several small studies have been published, most of which note that HPV 11 is associated with a more aggressive course. Papilloma biopsies were taken from patients undergoing surgical treatment of RRP and were subjected to HPV typing. 118 patients with juvenile-onset RRP, with at least 1 year of clinical data and infected with a single HPV type, were analyzed. HPV 11 was encountered in 40% of the patients. By our definition, most of the patients in the sample (81%) had run an aggressive course. The odds of a patient with HPV 11 running an aggressive course were 3.9 times higher than those of patients with HPV 6 (Fisher's exact p = 0.017). However, clinical course was more closely associated with the age of the patient (at diagnosis and at the time of the current surgery) than with HPV type. Patients with HPV 11 were diagnosed at a younger age (2.4 y) than were those with HPV 6 (3.4 y) (p = 0.014). Both by multiple linear regression and by multiple logistic regression, HPV type was only weakly associated with metrics of disease course when simultaneously accounting for age. Conclusions/Significance: the course of RRP is variable, and a quarter of the variability can be accounted for by the age of the patient. HPV 11 is more closely associated with a younger age at diagnosis than with an aggressive clinical course. These data suggest that factors other than HPV type and patient age determine disease course.

  12. Clumpak: a program for identifying clustering modes and packaging population structure inferences across K.

    PubMed

    Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay

    2015-09-01

    The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
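
    The label-switching problem that motivates the Clumpp similarity matrix can be shown in miniature: cluster labels are arbitrary per run, so replicate runs must be aligned by the column permutation that best matches a reference. A brute-force sketch for small K, on invented data (Clumpak's actual pipeline adds Markov clustering of the resulting similarities):

      import numpy as np
      from itertools import permutations

      def align(ref, run):
          """Permute run's cluster columns to best match ref (small K only)."""
          k = ref.shape[1]
          best = min(permutations(range(k)),
                     key=lambda p: np.abs(ref - run[:, list(p)]).sum())
          return run[:, list(best)]

      rng = np.random.default_rng(6)
      ref = rng.dirichlet(np.ones(3), size=8)      # membership matrix, K = 3
      replicate = ref[:, [2, 0, 1]] + rng.normal(0, 0.01, ref.shape)  # relabeled
      aligned = align(ref, replicate)
      print("mean |difference| after alignment:", float(np.abs(ref - aligned).mean()))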

  13. Altered Running Economy Directly Translates to Altered Distance-Running Performance.

    PubMed

    Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger

    2016-11-01

    Our goal was to quantify whether small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity, and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m·s^-1 by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.

  14. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects cache miss rates of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or multiple compute nodes on a cluster as slaves using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
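    The key structural point is that the Jacobian columns of a Levenberg-Marquardt step come from independent forward runs, so they parallelize trivially. The paper does this with MPI inside HGC5; as a language-neutral illustration, here is the same idea sketched with Python's multiprocessing and an invented quadratic stand-in for the forward model.

    ```python
    import numpy as np
    from multiprocessing import Pool

    def forward(params):
        """Invented stand-in: a real calibration would run a full HGC5-style
        coupled flow/transport forward solve here."""
        a, b = params
        x = np.linspace(0.0, 1.0, 50)
        return a * x**2 + b * x

    def jacobian_parallel(params, h=1e-6, workers=4):
        """Finite-difference Jacobian: the n+1 forward runs are independent,
        so they can be farmed out to parallel workers (MPI ranks in the
        paper, a process pool in this sketch)."""
        params = np.asarray(params, dtype=float)
        runs = [params] + [params + h * np.eye(len(params))[i]
                           for i in range(len(params))]
        with Pool(workers) as pool:
            outputs = pool.map(forward, runs)
        base = outputs[0]
        return np.column_stack([(out - base) / h for out in outputs[1:]])

    if __name__ == "__main__":
        J = jacobian_parallel([2.0, -1.0])
        print(J.shape)   # (50, 2): one column per calibrated parameter
    ```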

  15. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    PubMed

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.

  16. Documentation of a restart option for the U.S. Geological Survey coupled Groundwater and Surface-Water Flow (GSFLOW) model

    USGS Publications Warehouse

    Regan, R. Steve; Niswonger, Richard G.; Markstrom, Steven L.; Barlow, Paul M.

    2015-10-02

    The spin-up simulation should be run for a sufficient length of time necessary to establish antecedent conditions throughout a model domain. Each GSFLOW application can require different lengths of time to account for the hydrologic stresses to propagate through a coupled groundwater and surface-water system. Typically, groundwater hydrologic processes require many years to come into equilibrium with dynamic climate and other forcing (or stress) data, such as precipitation and well pumping, whereas runoff-dominated surface-water processes respond relatively quickly. Use of a spin-up simulation can substantially reduce execution-time requirements for applications where the time period of interest is small compared to the time for hydrologic memory; thus, use of the restart option can be an efficient strategy for forecast and calibration simulations that require multiple simulations starting from the same day.

  17. Generalized conformal structure, dilaton gravity and SYK

    NASA Astrophysics Data System (ADS)

    Taylor, Marika

    2018-01-01

    A theory admits generalized conformal structure if the only scale in the quantum theory is set by a dimensionful coupling. SYK is an example of a theory with generalized conformal structure and in this paper we investigate the consequences of this structure for correlation functions and for the holographic realization of SYK. The Ward identities associated with the generalized conformal structure of SYK are implemented holographically in gravity/multiple scalar theories, which always have a parent AdS3 origin. For questions involving only the graviton/running scalar sector, one can always describe the bulk running in terms of a single scalar but multiple running scalars are in general needed once one includes the bulk fields corresponding to all SYK operators. We then explore chaos in holographic theories with generalized conformal structure. The four point function explored by Maldacena, Shenker and Stanford exhibits exactly the same chaotic behaviour in any such theory as in holographic realizations of conformal theories i.e. the dimensionful coupling scale does not affect the chaotic exponential growth.

  18. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  19. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  20. 40 CFR Table 1 to Subpart III of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...

  1. 40 CFR Table 1 to Subpart Eeee of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...

  2. Impact of physical fitness and body composition on injury risk among active young adults: A study of Army trainees.

    PubMed

    Jones, Bruce H; Hauret, Keith G; Dye, Shamola K; Hauschild, Veronique D; Rossi, Stephen P; Richardson, Melissa D; Friedl, Karl E

    2017-11-01

    To determine the combined effects of physical fitness and body composition on risk of training-related musculoskeletal injuries among Army trainees. Retrospective cohort study. Rosters of soldiers entering Army basic combat training (BCT) from 2010 to 2012 were linked with data from multiple sources for age, sex, physical fitness (heights, weights (mass), body mass index (BMI), 2 mile run times, push-ups), and medical injury diagnoses. Analyses included descriptive means and standard deviations, comparative t-tests, risks of injury, and relative risks (RR) and 95% confidence intervals (CI). Fitness and BMI were divided into quintiles (groups of 20%) and stratified for chi-square (χ²) comparisons and to determine trends. Data were obtained for 143,398 men and 41,727 women. As run times became slower, injury risks increased steadily (men=9.8-24.3%, women=26.5-56.0%; χ² trends (p<0.00001)). For both genders, the relationship of BMI to injury risk was bimodal, with the lowest risk in the average BMI group (middle quintile). Injury risks were highest in the slowest groups with lowest BMIs (male trainees=26.5%; female trainees=63.1%). Compared to the lowest-risk group (average BMI with fastest run times), RRs were significant (male trainees=8.5%; RR 3.1, CI: 2.8-3.4; female trainees=24.6%; RR 2.6, CI: 2.3-2.8). Trainees with the lowest BMIs exhibited the highest injury risks for both genders and across all fitness levels. While the most aerobically fit Army trainees experience lower risk of training-related injury, at any given aerobic fitness level those with the lowest BMIs are at highest risk. This has implications for recruitment and retention fitness standards. Copyright © 2017. Published by Elsevier Ltd.
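    The risk comparisons in this record are standard relative-risk arithmetic on stratified counts. A minimal sketch with the usual log-RR confidence interval; the cell counts below are hypothetical, since the abstract reports only the derived risks and RRs.

    ```python
    import math

    def relative_risk(exposed_cases, exposed_n, ref_cases, ref_n, z=1.96):
        """Relative risk with a 95% CI via the standard log-RR variance."""
        r1, r0 = exposed_cases / exposed_n, ref_cases / ref_n
        rr = r1 / r0
        se = math.sqrt(1/exposed_cases - 1/exposed_n + 1/ref_cases - 1/ref_n)
        return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))

    # Hypothetical counts: 24% injury risk in a slow, low-BMI stratum
    # versus 8% in a fastest, average-BMI stratum.
    rr, (lo, hi) = relative_risk(120, 500, 40, 500)
    print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # RR = 3.00, CI ~2.1-4.2
    ```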

  3. Effect of Minimalist Footwear on Running Efficiency

    PubMed Central

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  4. Multi-instance learning based on instance consistency for image retrieval

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

    Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches often fail to select positive instances correctly from positive bags, which can result in low accuracy. In this paper, we propose a new image retrieval approach called multiple instance learning based on instance-consistency (MILIC) to mitigate this issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, based on the potential positive instances, that represents the relationship among bags and instances and converts a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.
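    The pipeline sketched in the abstract (rank instances by a consistency score, embed each bag against the top-ranked instances, train a single-instance SVM) can be illustrated compactly. Everything below is an illustrative stand-in on toy data: the consistency score and embedding are simplified surrogates, not the authors' MILIC formulas.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy bags: positive bags mix target instances (near +2) with background.
    def make_bag(positive):
        n = rng.integers(4, 8)
        bag = rng.normal(0.0, 1.0, (n, 2))
        if positive:
            bag[: n // 2] += 2.0
        return bag

    bags = [make_bag(i % 2 == 0) for i in range(40)]
    labels = np.array([1 if i % 2 == 0 else 0 for i in range(40)])

    # Stand-in "instance consistency": mean similarity of an instance to the
    # pooled instances of positive bags (the paper computes IC differently).
    pos_pool = np.vstack([b for b, y in zip(bags, labels) if y == 1])
    def consistency(x):
        return np.exp(-np.linalg.norm(pos_pool - x, axis=1)).mean()

    # Prototypes: top-ranked instances drawn from positive bags.
    scored = sorted((consistency(x), tuple(x))
                    for b, y in zip(bags, labels) if y == 1 for x in b)
    prototypes = np.array([p for _, p in scored[-10:]])

    # Embed each bag as its max similarity to every prototype: one vector per bag.
    def embed(bag):
        d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
        return np.exp(-d).max(axis=0)

    X = np.vstack([embed(b) for b in bags])
    clf = SVC().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```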

  5. PHOBOS, the Early Years

    NASA Astrophysics Data System (ADS)

    Stephans, George S. F.; Back, B. B.; Baker, M. D.; Barton, D. S.; Betts, R. R.; Ballintijn, M.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; Garcia, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Holynski, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Michalowski, J.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steadman, S. G.; Steinberg, P.; Stephans, G. S. F.; Stodulski, M.; Sukhanov, A.; Tang, J.-L.; Teng, R.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wadsworth, B.; Wolfs, F. L. H.; Wosiek, B.; Wozniak, K.; Wuosmaa, A. H.; Wyslouch, B.

    2002-06-01

    The PHOBOS detector, one of the two small experiments at RHIC, focuses on measurements of charged particle multiplicity over almost the full phase space and identified particles near mid-rapidity. Results will be presented from the early RHIC gold--gold runs at nucleon--nucleon center of mass energies of 56 and 130 GeV as well as the recently concluded run at the full RHIC energy of 200 GeV.

  6. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. In addition to the downloadable BioNode images, we provide online tutorials, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.

  7. A Hybrid OFDM-TDM Architecture with Decentralized Dynamic Bandwidth Allocation for PONs

    PubMed Central

    Cevik, Taner

    2013-01-01

    One of the major challenges of passive optical networks is to achieve a fair arbitration mechanism that prevents collisions on the upstream channel when multiple users attempt to access the common fiber at the same time. Therefore, in this study we mainly focus on fair bandwidth allocation among users, and present a hybrid Orthogonal Frequency Division Multiplexed/Time Division Multiplexed architecture with a dynamic bandwidth allocation scheme that provides satisfactory service quality to users according to their varying bandwidth requirements. Unnecessary delays that occur in centralized schemes during the bandwidth assignment stage are eliminated by utilizing a decentralized approach. Instead of sending bandwidth demands to the optical line terminal (OLT), which is the only competent authority, each optical network unit (ONU) runs the same bandwidth demand determination algorithm. ONUs inform each other via a signaling channel about the status of their queues. This information is fed to the bandwidth determination algorithm, which is run by each ONU in a distributed manner. Furthermore, the Light Load Penalty, a phenomenon in optical communications, is mitigated by limiting the amount of bandwidth that an ONU can demand. PMID:24194684
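    The decentralized trick is that every ONU sees the same broadcast queue-status vector, so each can run the same deterministic allocation function and arrive at identical, collision-free grants without waiting for an OLT grant message. A minimal sketch of that idea; the proportional rule and the demand cap are illustrative assumptions, not the paper's exact algorithm.

    ```python
    FRAME_SLOTS = 100   # upstream capacity per cycle, in time slots (invented)
    DEMAND_CAP = 40     # cap on any one ONU's request, mitigating the Light Load Penalty

    def allocate(queue_status):
        """Deterministic allocation that every ONU computes identically from
        the broadcast queue-status vector, so no central grant is needed."""
        demands = [min(q, DEMAND_CAP) for q in queue_status]
        total = sum(demands)
        if total <= FRAME_SLOTS:
            return demands
        # proportional scaling when oversubscribed; flooring keeps grants integral
        grants = [d * FRAME_SLOTS // total for d in demands]
        # hand the leftover slots (lost to flooring) to the longest queues first
        leftover = FRAME_SLOTS - sum(grants)
        for i in sorted(range(len(demands)), key=lambda i: -queue_status[i])[:leftover]:
            grants[i] += 1
        return grants

    queues = [10, 55, 0, 80, 25]   # slots waiting at each of five ONUs
    print(allocate(queues))        # [8, 35, 0, 35, 22]: same result on every ONU
    ```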

  8. Continued Development of a Global Heat Transfer Measurement System at AEDC Hypervelocity Wind Tunnel 9

    NASA Technical Reports Server (NTRS)

    Kurits, Inna; Lewis, M. J.; Hamner, M. P.; Norris, Joseph D.

    2007-01-01

    Heat transfer rates are an extremely important consideration in the design of hypersonic vehicles such as atmospheric reentry vehicles. This paper describes the development of a data reduction methodology to evaluate global heat transfer rates using surface temperature-time histories measured with the temperature sensitive paint (TSP) system at AEDC Hypervelocity Wind Tunnel 9. As a part of this development effort, a scale model of the NASA Crew Exploration Vehicle (CEV) was painted with TSP and multiple sequences of high resolution images were acquired during a five-run test program. Heat transfer calculation from TSP data in Tunnel 9 is challenging due to the relatively long run times, the high-Reynolds-number environment, and the desire to utilize the typical stainless steel wind tunnel models used for force and moment testing. An approach to reduce TSP data into convective heat flux was developed, taking these conditions into consideration. Surface temperatures from high quality quantitative global temperature maps acquired with the TSP system were then used as an input into the algorithm. A preliminary comparison of the heat flux calculated using the TSP surface temperature data with the value calculated using the standard thermocouple data is reported.

  9. An Extended EPQ-Based Problem with a Discontinuous Delivery Policy, Scrap Rate, and Random Breakdown

    PubMed Central

    Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P.

    2015-01-01

    In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results. PMID:25821853

  10. An extended EPQ-based problem with a discontinuous delivery policy, scrap rate, and random breakdown.

    PubMed

    Chiu, Singa Wang; Lin, Hong-Dar; Song, Ming-Syuan; Chen, Hsin-Mei; Chiu, Yuan-Shyi P

    2015-01-01

    In real supply chain environments, the discontinuous multidelivery policy is often used when finished products need to be transported to retailers or customers outside the production units. To address this real-life production-shipment situation, this study extends recent work using an economic production quantity- (EPQ-) based inventory model with a continuous inventory issuing policy, defective items, and machine breakdown by incorporating a multiple delivery policy into the model to replace the continuous policy and investigates the effect on the optimal run time decision for this specific EPQ model. Next, we further expand the scope of the problem to combine the retailer's stock holding cost into our study. This enhanced EPQ-based model can be used to reflect the situation found in contemporary manufacturing firms in which finished products are delivered to the producer's own retail stores and stocked there for sale. A second model is developed and studied. With the help of mathematical modeling and optimization techniques, the optimal run times that minimize the expected total system costs comprising costs incurred in production units, transportation, and retail stores are derived, for both models. Numerical examples are provided to demonstrate the applicability of our research results.

  11. The impact of fiscal austerity on suicide mortality: Evidence across the 'Eurozone periphery'.

    PubMed

    Antonakakis, Nikolaos; Collins, Alan

    2015-11-01

    While linkages between some macroeconomic phenomena and suicides in some countries have been explored, only two studies, hitherto, have established a causal relationship between fiscal austerity and suicide, albeit in a single country. The aim of this study is to provide the first systematic multiple-country evidence of a causal relationship of fiscal austerity on time-, gender-, and age-specific suicide mortality across five Eurozone peripheral countries, namely Greece, Ireland, Italy, Portugal and Spain over the period 1968-2012, while controlling for various socioeconomic differences. The impact of fiscal adjustments is found to be gender-, age- and time-specific. Specifically, fiscal austerity has short-, medium- and long-run suicide increasing effects on the male population in the 65-89 age group. A 1% reduction in government spending is associated with a 1.38%, 2.42% and 3.32% increase in the short-, medium- and long-run, respectively, of male suicides rates in the 65-89 age group in the Eurozone periphery. These results are highly robust to alternative measures of fiscal austerity. Improved labour market institutions help mitigate the negative effects of fiscal austerity on suicide mortality. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. 5K Run: 7-Week Training Schedule for Beginners

    MedlinePlus

    ... This 5K training schedule incorporates a mix of running, walking and resting. This combination helps reduce the ... you'll gradually increase the amount of time running and reduce the amount of time walking. If ...

  13. Effects of a minimalist shoe on running economy and 5-km running performance.

    PubMed

    Fuller, Joel T; Thewlis, Dominic; Tsiros, Margarita D; Brown, Nicholas A T; Buckley, Jonathan D

    2016-09-01

    The purpose of this study was to determine if minimalist shoes improve time trial performance of trained distance runners and if changes in running economy, shoe mass, stride length, stride rate and footfall pattern were related to any difference in performance. Twenty-six trained runners performed three 6-min sub-maximal treadmill runs at 11, 13 and 15 km·h⁻¹ in minimalist and conventional shoes while running economy, stride length, stride rate and footfall pattern were assessed. They then performed a 5-km time trial. In the minimalist shoe, runners completed the trial in less time (effect size 0.20 ± 0.12), were more economical during sub-maximal running (effect size 0.33 ± 0.14) and decreased stride length (effect size 0.22 ± 0.10) and increased stride rate (effect size 0.22 ± 0.11). All but one runner ran with a rearfoot footfall in the minimalist shoe. Improvements in time trial performance were associated with improvements in running economy at 15 km·h⁻¹ (r = 0.58), with 79% of the improved economy accounted for by reduced shoe mass (P < 0.05). The results suggest that running in minimalist shoes improves running economy and 5-km running performance.

  14. Sex-related differences in the wheel-running activity of mice decline with increasing age.

    PubMed

    Bartling, Babett; Al-Robaiy, Samiya; Lehnich, Holger; Binder, Leonore; Hiebl, Bernhard; Simm, Andreas

    2017-01-01

    Laboratory mice of both sexes having free access to running wheels are commonly used to study mechanisms underlying the beneficial effects of physical exercise on health and aging in humans. However, comparative wheel-running activity profiles of male and female mice over a long period of time, in which increasing age plays an additional role, are unknown. Therefore, we permanently recorded the wheel-running activity (i.e., total distance, median velocity, time of breaks) of female and male mice until 9 months of age. Our records indicated higher wheel-running distances for females than males, which were highest in 2-month-old mice. This was achieved mainly by higher running velocities of the females and not by longer running times. However, the sex-related differences declined in parallel with the age-associated reduction in wheel-running activities. Female mice also showed more variance between the weekly running distances than males, which was recorded most often for females 4-6 months old but not older. Additional records of 24-month-old mice of both sexes indicated highly reduced wheel-running activities at old age. Surprisingly, this reduction at old age resulted mainly from lower running velocities and not from shorter running times. Old mice also differed in their course of night activity, which peaked later compared to younger mice. In summary, we demonstrated the influence of sex on the age-dependent activity profile of mice, which contrasts somewhat with humans, and this has to be considered when transferring exercise-mediated mechanisms from mouse to human. Copyright © 2016. Published by Elsevier Inc.

  15. Bearing fault diagnosis under unknown time-varying rotational speed conditions via multiple time-frequency curve extraction

    NASA Astrophysics Data System (ADS)

    Huang, Huan; Baddour, Natalie; Liang, Ming

    2018-02-01

    In normal operation, bearings often run under time-varying rotational speed conditions. Under such circumstances, the bearing vibrational signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One of the conventional methods of bearing fault diagnosis under time-varying speed conditions is resampling the non-stationary signal to a stationary signal via order tracking with the measured variable speed. With the resampled signal, the methods available for constant condition cases are thus applicable. However, the accuracy of the order tracking is often inadequate and the time-varying speed is sometimes not measurable. Thus, resampling-free methods that do not require tachometers are of interest for bearing fault diagnosis under time-varying rotational speed. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, the bearing fault can be detected and diagnosed without resampling. However, so far, the extraction of the IFCF for bearing fault diagnosis is mostly based on the assumption that at each moment the IFCF has the highest amplitude in the TFR, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves including the IFCF, its harmonics, and the ISRF can all be extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for extracting multiple T-F curves from the TFR based on fast path optimization, which is more reliable for T-F curve extraction. Then, a new procedure for bearing fault diagnosis under unknown time-varying speed conditions is developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are utilized to describe the relationship of the extracted curves, and fault diagnosis can then be achieved by comparing the ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated by simulated and experimental signals.
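    The path-optimization idea can be sketched as a dynamic-programming ridge search over the TFR: at each time step the curve may move only a few frequency bins, and the score trades local amplitude against smoothness, which is what makes the extraction robust at moments where the IFCF is not the strongest component. The penalty weight and toy spectrogram below are illustrative, not the authors' algorithm.

    ```python
    import numpy as np

    def extract_ridge(tfr, max_jump=2, penalty=0.5):
        """Dynamic-programming ridge extraction: find the frequency-bin path
        maximizing accumulated TFR amplitude minus a bin-jump penalty."""
        n_f, n_t = tfr.shape
        score = np.full((n_f, n_t), -np.inf)
        back = np.zeros((n_f, n_t), dtype=int)
        score[:, 0] = tfr[:, 0]
        for t in range(1, n_t):
            for f in range(n_f):
                lo, hi = max(0, f - max_jump), min(n_f, f + max_jump + 1)
                prev = score[lo:hi, t - 1] - penalty * np.abs(np.arange(lo, hi) - f)
                k = int(np.argmax(prev))
                score[f, t] = tfr[f, t] + prev[k]
                back[f, t] = lo + k
        # trace the best path backwards from the final column
        path = [int(np.argmax(score[:, -1]))]
        for t in range(n_t - 1, 0, -1):
            path.append(back[path[-1], t])
        return np.array(path[::-1])

    # Toy TFR: a chirp-like ridge in noise, mimicking an IFCF under varying speed.
    n_f, n_t = 64, 120
    true_bins = (10 + 30 * np.linspace(0, 1, n_t) ** 1.5).astype(int)
    tfr = np.random.default_rng(1).random((n_f, n_t))
    tfr[true_bins, np.arange(n_t)] += 3.0
    ridge = extract_ridge(tfr)
    print("mean bin error:", np.abs(ridge - true_bins).mean())   # typically < 1 bin
    ```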

  16. Determination of aliskiren in human serum quantities by HPLC-tandem mass spectrometry appropriate for pediatric trials.

    PubMed

    Burckhardt, Bjoern B; Ramusovic, Sergej; Tins, Jutta; Laeer, Stephanie

    2013-04-01

    The orally active direct renin inhibitor aliskiren is approved for the treatment of essential hypertension in adults. Analytical methods utilized in clinical studies on efficacy and safety have not been fully described in the literature but need a large sample volume ranging from 200 to 700 μL, rendering them unsuitable, particularly for pediatric applications. In the assay presented, only 100 μL of serum is needed for mixed-mode solid-phase extraction. The chromatographic separation was performed on Xselect(TM) C18 CSH columns with a mobile phase consisting of methanol-water-formic acid (75:25:0.005, v/v/v) and a flow rate of 0.4 mL/min. Running in positive electrospray ionization and multiple-reaction-monitoring mode, the mass spectrometer was set to analyze the precursor ion 552.2 m/z [M + H](+) to the product ion 436.2 m/z during a total run time of 5 min. The method covers a linear calibration range of 0.146-1200 ng/mL. Intra-run and inter-run precisions were 0.4-7.2 and 0.6-12.9%. Mean recovery was at least 89%. Selectivity, accuracy and stability results comply with current European Medicines Agency and Food and Drug Administration guidelines. This successfully validated LC-MS/MS method, with a wide linear calibration range requiring small serum amounts, is suitable for pharmacokinetic investigations of aliskiren in pediatrics, adults and the elderly. Copyright © 2012 John Wiley & Sons, Ltd.

  17. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    ERIC Educational Resources Information Center

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles of the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
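    The exercise the article describes, timing bubble sort on growing inputs and observing the growth, is easy to reproduce; a minimal sketch (doubling n should roughly quadruple the time, consistent with O(n²)):

    ```python
    import random
    import time

    def bubble_sort(a):
        """Classic bubble sort: repeatedly swap adjacent out-of-order pairs."""
        a = list(a)
        for i in range(len(a) - 1, 0, -1):
            for j in range(i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    # Empirical derivation of the run time: measure on doubling input sizes.
    for n in (500, 1000, 2000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        bubble_sort(data)
        print(f"n={n:5d}: {time.perf_counter() - t0:.3f} s")
    ```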

  18. The relationship between aerobic fitness and recovery from high-intensity exercise in infantry soldiers.

    PubMed

    Hoffman, J R

    1997-07-01

    The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 +/- 2.4%) and groups 3 (2.6 +/- 1.7%), 4 (2.3 +/- 1.6%), and 5 (2.3 +/- 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.

  19. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    NASA Astrophysics Data System (ADS)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.

  20. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples.

    PubMed

    Kim, Stephan D; Luo, Jiajun; Buchholz, D Bruce; Chang, R P H; Grayson, M

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
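    The scheduling policy itself is simple to state: a freshly triggered transient owns the dedicated fast instrument for a short window, after which the sample joins the round-robin rotation on the shared slow instrument. A toy sketch of that hand-off logic; the class, window length, and sample names are invented for illustration.

    ```python
    import itertools

    FAST_WINDOW_S = 10.0   # invented: how long a fresh transient owns the fast channel

    class MTDMScheduler:
        """Toy MTDM hand-off: one dedicated fast instrument for the newest
        transient, one slow instrument multiplexed round-robin over samples."""

        def __init__(self, samples):
            self.start = {s: None for s in samples}    # transient start times
            self.slow_cycle = itertools.cycle(samples)

        def trigger(self, sample, t):
            self.start[sample] = t                     # transient begins

        def assign(self, t):
            # fast channel: at most one sample still inside its fast window
            fast = [s for s, t0 in self.start.items()
                    if t0 is not None and t - t0 < FAST_WINDOW_S]
            return fast[:1], next(self.slow_cycle)

    sched = MTDMScheduler(["A", "B", "C"])
    sched.trigger("A", t=0.0)
    print(sched.assign(t=1.0))    # (['A'], 'A'): A is fresh, slow channel cycles
    print(sched.assign(t=50.0))   # ([], 'B'): fast idle, slow keeps cycling
    ```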

  1. Parallel evolution of image processing tools for multispectral imagery

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-11-01

    We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI, covering the recent Cerro Grande fire at Los Alamos, NM, USA.
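    A run-time prediction model of the kind mentioned is commonly built from Amdahl's law, T(p) = T1(s + (1 - s)/p), with the serial fraction s fitted from a few measured runs; whether the authors used this exact form is not stated in the record, and the timings below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def amdahl_time(p, t1, serial_frac):
        """Run time on p processors for a workload with a fixed serial fraction."""
        return t1 * (serial_frac + (1.0 - serial_frac) / p)

    # Hypothetical measured run times (seconds) on a small workstation cluster.
    procs = np.array([1, 2, 4, 8, 16])
    times = np.array([1000.0, 520.0, 285.0, 165.0, 110.0])

    (t1, s), _ = curve_fit(amdahl_time, procs, times, p0=[1000.0, 0.05])
    print(f"fitted serial fraction: {s:.3f}")
    print(f"predicted time on 32 processors: {amdahl_time(32, t1, s):.0f} s")
    ```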

  2. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves the performance of the parallel computing system by providing efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  3. CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM

    NASA Technical Reports Server (NTRS)

    Mccluney, K.

    1994-01-01

    In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allow the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single-point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three-button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however, a sample makefile is included. Sample input files are also included. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. This program was developed in 1992.
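    The core mechanic, re-evaluating a Boolean item tree after toggling states, fits in a few lines. The tree below is an invented example, not one of CRANS's input files:

    ```python
    # Minimal Boolean logic-tree evaluator in the spirit of CRANS: items are
    # leaves, internal nodes combine children with AND/OR, and toggling a
    # leaf's state cascades to every dependent item on re-evaluation.
    tree = {
        "bus_A":     {"op": "LEAF", "state": True},
        "bus_B":     {"op": "LEAF", "state": True},
        "power":     {"op": "OR",  "children": ["bus_A", "bus_B"]},  # redundant buses
        "sensor":    {"op": "LEAF", "state": True},
        "telemetry": {"op": "AND", "children": ["power", "sensor"]},
    }

    def evaluate(item):
        node = tree[item]
        if node["op"] == "LEAF":
            return node["state"]
        results = [evaluate(c) for c in node["children"]]
        return all(results) if node["op"] == "AND" else any(results)

    print(evaluate("telemetry"))        # True: everything healthy
    tree["bus_A"]["state"] = False      # fail one bus
    print(evaluate("telemetry"))        # still True: bus_B covers the OR
    tree["bus_B"]["state"] = False      # fail both buses: a common cause
    print(evaluate("telemetry"))        # False: the cascade reaches telemetry
    ```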

  4. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  5. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  6. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  7. 40 CFR Table 1 to Subpart Cccc of... - Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...

  8. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...

  9. Rollout and Turnoff (ROTO) Guidance and Information Displays: Effect on Runway Occupancy Time in Simulated Low-Visibility Landings

    NASA Technical Reports Server (NTRS)

    Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith

    2001-01-01

    This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.

  10. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to run on any engineer's desktop computer.
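    The contrast with Monte Carlo comes down to one recursion: a covariance tool propagates P(k+1) = F P F^T + Q once, where a Monte Carlo study resamples thousands of trajectories to estimate the same envelope. A toy sketch on a 2-state constant-velocity system (dynamics and noise levels invented), with a Monte Carlo cross-check:

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])        # constant-velocity dynamics
    Q = np.diag([0.0, 1e-4])          # process noise on velocity only

    P = np.diag([1.0, 0.01])          # initial knowledge covariance
    for _ in range(100):
        P = F @ P @ F.T + Q           # one linear step instead of many samples

    print(f"1-sigma position error after 100 steps: {np.sqrt(P[0, 0]):.2f}")

    # Monte Carlo check: thousands of sampled trajectories give the same envelope.
    rng = np.random.default_rng(0)
    x = rng.multivariate_normal([0, 0], np.diag([1.0, 0.01]), size=20000)
    for _ in range(100):
        x = x @ F.T + rng.multivariate_normal([0, 0], Q, size=20000)
    print(f"Monte Carlo estimate:                      {x[:, 0].std():.2f}")
    ```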

  11. Static Stretching Alters Neuromuscular Function and Pacing Strategy, but Not Performance during a 3-Km Running Time-Trial

    PubMed Central

    Damasceno, Mayara V.; Duarte, Marcos; Pasqua, Leonardo A.; Lima-Silva, Adriano E.; MacIntosh, Brian R.; Bertuzzi, Rômulo

    2014-01-01

    Purpose Previous studies report that static stretching (SS) impairs running economy. Assuming that pacing strategy relies on rate of energy use, this study aimed to determine whether SS would modify pacing strategy and performance in a 3-km running time-trial. Methods Eleven recreational distance runners performed a) a constant-speed running test without previous SS and a maximal incremental treadmill test; b) an anthropometric assessment and a constant-speed running test with previous SS; c) a 3-km time-trial familiarization on an outdoor 400-m track; d and e) two 3-km time-trials, one with previous SS (experimental situation) and one without (control situation). The order of sessions d and e was randomized in a counterbalanced fashion. Sit-and-reach and drop jump tests were performed before the 3-km running time-trial in the control situation and before and after stretching exercises in the SS condition. Running economy, stride parameters, and electromyographic activity (EMG) of vastus medialis (VM), biceps femoris (BF) and gastrocnemius medialis (GA) were measured during the constant-speed tests. Results The overall running time did not change with condition (SS 11:35±00:31 s; control 11:28±00:41 s, p = 0.304), but the first 100 m was completed at a significantly lower velocity after SS. Surprisingly, SS did not modify the running economy, but the iEMG for the BF (+22.6%, p = 0.031), stride duration (+2.1%, p = 0.053) and range of motion (+11.1%, p = 0.0001) were significantly modified. Drop jump height decreased following SS (−9.2%, p = 0.001). Conclusion Static stretching impaired neuromuscular function, resulting in a slow start during a 3-km running time-trial, thus demonstrating the fundamental role of the neuromuscular system in the self-selected speed during the initial phase of the race. PMID:24905918

  12. Running multiple marathons is not a risk factor for premature subclinical vascular impairment.

    PubMed

    Pressler, Axel; Suchy, Christiane; Friedrichs, Tasja; Dallinger, Sophia; Grabs, Viola; Haller, Bernhard; Halle, Martin; Scherr, Johannes

    2017-08-01

    Background In contrast to the well-accepted benefits of moderate exercise, recent research has suggested potential deleterious effects of repeated marathon running on the cardiovascular system. We thus performed a comprehensive analysis of markers of subclinical vascular damage in a cohort of runners having finished multiple marathon races successfully. Design This was a prospective, observational study. Methods A total of 97 healthy male Munich marathon participants (mean age 44 ± 10 years) underwent detailed training history, cardiopulmonary exercise testing for assessment of peak oxygen uptake, ultrasound for assessment of intima-media-thickness as well as non-invasive assessments of ankle-brachial index, augmentation index, pulse wave velocity and reactive hyperaemia index. Results Runners had previously completed a median of eight (range 1-500) half marathons, six (1-100) full marathons and three (1-40) ultramarathons; mean weekly and annual training volumes were 59 ± 23 and 1639 ± 979 km. Mean peak oxygen uptake was 50 ± 8 ml/min/kg, and the Munich marathon was finished in 3:45 ± 0:32 h. Runners showed normal mean values for intima-media-thickness (0.60 ± 0.14 mm), ankle-brachial index (1.2 ± 0.1), augmentation index (17 ± 13%), pulse wave velocity (8.7 ± 1.4 m/s) and reactive hyperaemia index (1.96 ± 0.50). Age was significantly and independently associated with intima-media-thickness (r = 0.531; p < 0.001), augmentation index (r = 0.593; p < 0.001) and pulse wave velocity (r = 0.357; p < 0.001). However, no independent associations of peak oxygen uptake, marathon finishing time, number of completed races or weekly and annual training km with any of the vascular parameters were observed. Conclusions In this cohort of healthy male runners, running multiple marathon races did not pose an additional risk factor for premature subclinical vascular impairment beyond age.

  13. Label-free protein quantification using LC-coupled ion trap or FT mass spectrometry: Reproducibility, linearity, and application with complex proteomes.

    PubMed

    Wang, Guanghui; Wu, Wells W; Zeng, Weihua; Chou, Chung-Lin; Shen, Rong-Fong

    2006-05-01

    A critical step in protein biomarker discovery is the ability to contrast proteomes, a process generally referred to as quantitative proteomics. While stable-isotope labeling (e.g., ICAT, 18O- or 15N-labeling, or AQUA) remains the core technology used in mass spectrometry-based proteomic quantification, increasing efforts have been directed to the label-free approach that relies on direct comparison of peptide peak areas between LC-MS runs. This latter approach is attractive to investigators for its simplicity as well as cost effectiveness. In the present study, the reproducibility and linearity of using a label-free approach to highly complex proteomes were evaluated. Various amounts of proteins from different proteomes were subjected to repeated LC-MS analyses using an ion trap or Fourier transform mass spectrometer. Highly reproducible data were obtained between replicated runs, as evidenced by nearly ideal Pearson's correlation coefficients (for ions' peak areas or retention times) and average peak area ratios. In general, more than 50% and nearly 90% of the peptide ion ratios deviated less than 10% and 20%, respectively, from the average in duplicate runs. In addition, the multiplicity ratios of the amounts of proteins used correlated nicely with the observed averaged ratios of peak areas calculated from detected peptides. Furthermore, the removal of abundant proteins from the samples led to an improvement in reproducibility and linearity. A computer program has been written to automate the processing of data sets from experiments with groups of multiple samples for statistical analysis. Algorithms for outlier-resistant mean estimation and for adjusting the statistical significance threshold under multiplicity of testing were incorporated to minimize the rate of false positives. The program was applied to quantify changes in proteomes of parental and p53-deficient HCT-116 human cells and found to yield reproducible results. Overall, this study demonstrates an alternative approach that allows global quantification of differentially expressed proteins in complex proteomes. The utility of this method to biomarker discovery is likely to synergize with future improvements in the detection sensitivity of mass spectrometers.
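    The core label-free comparison, matching peptide peak areas across replicate runs and checking the spread of their ratios, is easy to sketch. The peak areas below are simulated with an assumed 12% run-to-run variation, standing in for real detected ions:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated peak areas for 1000 peptide ions in duplicate LC-MS runs of
    # the same sample; run 2 varies ~12% around run 1 (an invented stand-in).
    run1 = rng.lognormal(mean=10, sigma=1.5, size=1000)
    run2 = run1 * rng.normal(1.0, 0.12, size=1000)

    ratios = run2 / run1
    pearson = np.corrcoef(np.log(run1), np.log(run2))[0, 1]
    within_10 = np.mean(np.abs(ratios - ratios.mean()) < 0.10 * ratios.mean())
    within_20 = np.mean(np.abs(ratios - ratios.mean()) < 0.20 * ratios.mean())

    print(f"Pearson r (log areas): {pearson:.3f}")
    print(f"ratios within 10% of mean: {within_10:.0%}")   # ~60% at 12% noise
    print(f"ratios within 20% of mean: {within_20:.0%}")   # ~90%
    ```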

  14. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
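    A minimal sketch of the inspector/executor split: the inspector walks the loop's index arrays to assign each iteration a wavefront (the earliest stage at which the value it reads is ready), and the executor can then dispatch each wavefront's iterations concurrently. The index arrays are toy data, and only flow dependences are tracked here; a full inspector also handles anti- and output dependences.

    ```python
    from collections import defaultdict

    # Toy loop with indirection:  for i in range(n): x[write[i]] = f(x[read[i]])
    # Compile-time analysis cannot order these iterations; the inspector can.
    read  = [0, 1, 2, 0, 4, 3]
    write = [2, 3, 4, 5, 0, 1]
    n = len(read)

    # Inspector: iteration i must wait for the latest earlier iteration that
    # writes the location it reads; its wavefront is one past that writer's.
    last_writer_wave = {}
    wavefront = [0] * n
    for i in range(n):
        wavefront[i] = last_writer_wave.get(read[i], -1) + 1
        last_writer_wave[write[i]] = wavefront[i]

    # Executor: iterations in the same wavefront are mutually independent and
    # could be dispatched to parallel workers, wavefront by wavefront.
    stages = defaultdict(list)
    for i, w in enumerate(wavefront):
        stages[w].append(i)
    for w in sorted(stages):
        print(f"wavefront {w}: iterations {stages[w]}")
    # wavefront 0: iterations [0, 1, 3]
    # wavefront 1: iterations [2, 5]
    # wavefront 2: iterations [4]
    ```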

  15. An alternative approach to the Army Physical Fitness Test two-mile run using critical velocity and isoperformance curves.

    PubMed

    Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R

    2012-02-01

    The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2max: 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2max test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2max prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
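    The CV model underlying the protocol is the linear distance-time relationship d = ARC + CV · t: a straight-line fit to the exhaustive runs yields critical velocity as the slope and anaerobic running capacity as the intercept. A sketch with invented time-to-exhaustion data:

    ```python
    import numpy as np

    # Hypothetical time-to-exhaustion trials: (time s, distance m) at four speeds.
    t = np.array([150.0, 300.0, 600.0, 900.0])
    d = np.array([750.0, 1350.0, 2500.0, 3600.0])

    # Linear CV model: distance = ARC + CV * time
    CV, ARC = np.polyfit(t, d, 1)
    print(f"critical velocity: {CV:.2f} m/s, anaerobic running capacity: {ARC:.0f} m")

    # Predicted two-mile (3218.7 m) time from the fitted parameters:
    two_mile_s = (3218.7 - ARC) / CV
    print(f"predicted two-mile time: {two_mile_s / 60:.1f} min")
    ```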

  16. Element Verification and Comparison in Sierra/Solid Mechanics Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohashi, Yuki; Roth, William

    2016-05-01

    The goal of this project was to study the effects of element selection on the Sierra/SM solutions to five common solid mechanics problems. A total of nine element formulations were used for each problem. The models were run multiple times with varying spatial and temporal discretization in order to ensure convergence. The first four problems have been compared to analytical solutions, and all numerical results were found to be sufficiently accurate. The penetration problem was found to have a high mesh dependence in terms of element type, mesh discretization, and meshing scheme. Also, the time to solution is shown for each problem in order to facilitate element selection when computer resources are limited.

  17. On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN

    NASA Astrophysics Data System (ADS)

    Patriarchi, P.; Perinotto, M.

    The Complete Linearization Method (Mihalas, 1978) consists of determining the radiation field (at a set of frequency points), atomic level populations, temperature, electron density, etc., by solving the system of radiative transfer, thermal equilibrium, and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved iteratively after linearization, using a perturbative method starting from an initial guess solution. The Complete Linearization Method is, of course, more time consuming than simpler approaches. But how great can this disadvantage be in the age of supercomputers? The CPU time needed to run a model can be evaluated approximately by counting the number of multiplications necessary to solve the system.
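
    As a rough, hedged estimate (not the authors' figure): if the linearized system is solved by the usual block-tridiagonal elimination over depth, the multiplication count per iteration scales as

      N_{\mathrm{mult}} \;\approx\; c \, N_d \left(N_f + N_l + N_c\right)^{3},

    where $N_d$ is the number of depth points, $N_f$ the number of frequency points, $N_l$ the number of atomic levels, $N_c$ the remaining constraint variables (temperature, electron density), and $c$ a constant of order unity. Dividing by the machine's multiplication rate, and multiplying by the expected iteration count, gives the CPU-time estimate the record alludes to.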

  18. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
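
    The structural difference from a windowed FFT can be shown in a few lines. The NumPy sketch below is an illustrative critically sampled polyphase analysis bank, not the JPL fixed-point design: a prototype low-pass filter with T taps per branch replaces the single multiplicative window.

      import numpy as np

      def polyphase_fft(x, h, n_ch):
          # h has length n_ch * T; each column of `taps` filters one branch.
          taps = h.reshape(-1, n_ch)                   # T x n_ch coefficients
          t = taps.shape[0]
          n_frames = len(x) // n_ch - t + 1
          out = []
          for k in range(n_frames):
              block = x[k * n_ch:(k + t) * n_ch].reshape(t, n_ch)
              out.append(np.fft.fft((block * taps).sum(axis=0)))
          return np.array(out)                         # frames x channels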

  19. Sensor-scheduling simulation of disparate sensors for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Hobson, T.; Clarkson, I.

    2011-09-01

    The art and science of space situational awareness (SSA) has been practised and developed since the time of Sputnik. However, recent developments, such as the accelerating pace of satellite launches, the proliferation of launch-capable agencies, both commercial and sovereign, and recent well-publicised collisions involving man-made space objects, have further magnified the importance of timely and accurate SSA. The United States Strategic Command (USSTRATCOM) operates the Space Surveillance Network (SSN), a global network of sensors tasked with maintaining SSA. The rapidly increasing number of resident space objects will require commensurate improvements in the SSN. Sensors are scarce resources that must be scheduled judiciously to obtain measurements of maximum utility. Improvements in sensor scheduling and fusion can serve to reduce the number of additional sensors that may be required. Recently, Hill et al. [1] proposed and developed a simulation environment named TASMAN (Tasking Autonomous Sensors in a Multiple Application Network) to enable testing of alternative scheduling strategies within a simulated multi-sensor, multi-target environment. TASMAN simulates a high-fidelity, hardware-in-the-loop system by running multiple machines with different roles in parallel. At present, TASMAN is limited to simulations involving electro-optic sensors. Its high fidelity is at once a feature and a limitation, since supercomputing is required to run simulations of appreciable scale. In this paper, we describe an alternative, modular and scalable SSA simulation system that can extend the work of Hill et al. with reduced complexity, albeit also with reduced fidelity. The tool has been developed in MATLAB and can therefore be run on a very wide range of computing platforms. It can also make use of MATLAB’s parallel processing capabilities to obtain considerable speed-up. The speed and flexibility so obtained can be used to quickly test scheduling algorithms even with a relatively large number of space objects. We further describe an application of the tool, exploring how the relative mixture of electro-optical and radar sensors affects the scheduling, fusion and achievable accuracy of an SSA system. By varying the mixture of sensor types, we are able to characterise the main advantages and disadvantages of each configuration.

  20. Multiple intensity distributions from a single optical element

    NASA Astrophysics Data System (ADS)

    Berens, Michael; Bruneton, Adrien; Bäuerle, Axel; Traub, Martin; Wester, Rolf; Stollenwerk, Jochen; Loosen, Peter

    2013-09-01

    We report on an extension of the previously published two-step freeform optics tailoring algorithm using a Monge-Kantorovich mass transportation framework. The algorithm's ability to design multiple freeform surfaces allows for the inclusion of multiple distinct light paths and hence the implementation of multiple lighting functions in a single optical element. We demonstrate the procedure in the context of automotive lighting, in which a fog lamp and a daytime running lamp are integrated in a single optical element illuminated by two distinct groups of LEDs.

  1. Comparison of Sprint and Run Times with Performance on the Wingate Anaerobic Test.

    ERIC Educational Resources Information Center

    Tharp, Gerald D.; And Others

    1985-01-01

    Male volunteers were studied to examine the relationship between the Wingate Anaerobic Test (WAnT) and sprint-run times and to determine the influence of age and weight. Results indicate the WAnT is a moderate predictor of dash and run times but becomes a stronger predictor when adjusted for body weight. (Author/MT)

  2. 12 CFR 1102.306 - Procedures for requesting records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... section; (B) Where the running of such time is suspended for the calculation of a cost estimate for the... section; (C) Where the running of such time is suspended for the payment of fees pursuant to the paragraph... of the invoice. (ix) The time limit for the ASC to respond to a request will not begin to run until...

  3. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    PubMed

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
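
    The computation those hardware engines accelerate is compact enough to state in full. The following NumPy sketch of one contrastive-divergence (CD-1) update for a binary RBM is illustrative only; the paper's engines implement the equivalent arithmetic in fixed point across FPGAs.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def cd1_update(W, b, c, v0, lr=0.1, rng=np.random.default_rng()):
          # Positive phase: sample hidden units from the data.
          ph0 = sigmoid(v0 @ W + c)
          h0 = (rng.random(ph0.shape) < ph0).astype(float)
          # Negative phase: one Gibbs step back to visibles and up again.
          pv1 = sigmoid(h0 @ W.T + b)
          ph1 = sigmoid(pv1 @ W + c)
          # Gradient-ascent updates on weights and biases.
          W += lr * (v0.T @ ph0 - pv1.T @ ph1)
          b += lr * (v0 - pv1).sum(axis=0)
          c += lr * (ph0 - ph1).sum(axis=0)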

  4. Multi-Scale Human Respiratory System Simulations to Study Health Effects of Aging, Disease, and Inhaled Substances

    NASA Astrophysics Data System (ADS)

    Kunz, Robert; Haworth, Daniel; Dogan, Gulkiz; Kriete, Andres

    2006-11-01

    Three-dimensional, unsteady simulations of multiphase flow, gas exchange, and particle/aerosol deposition in the human lung are reported. Surface data for human tracheo-bronchial trees are derived from CT scans, and are used to generate three-dimensional CFD meshes for the first several generations of branching. One-dimensional meshes for the remaining generations down to the respiratory units are generated using branching algorithms based on those that have been proposed in the literature, and a zero-dimensional respiratory unit (pulmonary acinus) model is attached at the end of each terminal bronchiole. The process is automated to facilitate rapid model generation. The model is exercised through multiple breathing cycles to compute the spatial and temporal variations in flow, gas exchange, and particle/aerosol deposition. The depth of the 3D/1D transition (at branching generation n) is a key parameter, and can be varied. High-fidelity models (large n) are run on massively parallel distributed-memory clusters, and are used to generate physical insight and to calibrate/validate the 1D and 0D models. Suitably validated lower-order models (small n) can be run on single-processor PCs with run times that allow model-based clinical intervention for individual patients.

  5. Acute differences in foot strike and spatiotemporal variables for shod, barefoot or minimalist male runners.

    PubMed

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-05-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running.

  6. Acute Differences in Foot Strike and Spatiotemporal Variables for Shod, Barefoot or Minimalist Male Runners

    PubMed Central

    McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian

    2014-01-01

    This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min⁻¹). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480

  7. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (>99%) occurs for a large, scaled problem with 64³ particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are functions of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems, including those with inhomogeneous plasmas, on other parallel machines once the machine-dependent parameters are known.
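
    The published timing expressions are specific to the GCPIC decomposition; purely for illustration, a model of this general shape is

      T_{\mathrm{step}} \;\approx\; \frac{N_p \, t_{\mathrm{push}}}{P}
        \;+\; \frac{V_{\mathrm{grid}}}{B_g} \;+\; \frac{V_{\mathrm{part}}}{B_p},

    where $N_p$ is the particle count, $P$ the number of processors, $t_{\mathrm{push}}$ the per-particle push cost implied by the effective FLOP rate, $V_{\mathrm{grid}}$ and $V_{\mathrm{part}}$ the volumes of grid and particle data exchanged per step, and $B_g$, $B_p$ the corresponding interprocessor bandwidths; the paper's expressions resolve such terms for each decomposition choice.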

  8. The NLstart2run study: Training-related factors associated with running-related injuries in novice runners.

    PubMed

    Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke

    2016-08-01

    The incidence of running-related injuries is high. Some risk factors for injury have been identified in novice runners; however, little is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend towards running three times per week being more hazardous than two times could be observed. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  9. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously, this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.

  10. Rapid Alterations in Perirenal Adipose Tissue Transcriptomic Networks with Cessation of Voluntary Running

    PubMed Central

    Toedebusch, Ryan G.; Roberts, Christian K.; Roberts, Michael D.; Booth, Frank W.

    2015-01-01

    In maturing rats, the growth of abdominal fat is attenuated by voluntary wheel running. After the cessation of running by wheel locking, a rapid increase in adipose tissue growth to a size that is similar to rats that have never run (i.e. catch-up growth) has been previously reported by our lab. In contrast, diet-induced increases in adiposity have a slower onset with relatively delayed transcriptomic responses. The purpose of the present study was to identify molecular pathways associated with the rapid increase in adipose tissue after ending 6 wks of voluntary running at the time of puberty. Age-matched, male Wistar rats were given access to running wheels from 4 to 10 weeks of age. From the 10th to 11th week of age, one group of rats had continued wheel access, while the other group had one week of wheel locking. Perirenal adipose tissue was extracted, RNA sequencing was performed, and bioinformatics analyses were executed using Ingenuity Pathway Analysis (IPA). IPA was chosen to assist in the understanding of complex ‘omics data by integrating data into networks and pathways. Wheel locked rats gained significantly more fat mass and significantly increased body fat percentage between weeks 10–11 despite having decreased food intake, as compared to rats with continued wheel access. IPA identified 646 known transcripts differentially expressed (p < 0.05) between continued wheel access and wheel locking. In wheel locked rats, IPA revealed enrichment of transcripts for the following functions: extracellular matrix, macrophage infiltration, immunity, and pro-inflammatory. These findings suggest that the increases in visceral adipose tissue that accompany the cessation of pubertal physical activity are associated with the alteration of multiple pathways, some of which may potentiate the development of pubertal obesity and obesity-associated systemic low-grade inflammation that occurs later in life. PMID:26678390

  11. Rapid Alterations in Perirenal Adipose Tissue Transcriptomic Networks with Cessation of Voluntary Running.

    PubMed

    Ruegsegger, Gregory N; Company, Joseph M; Toedebusch, Ryan G; Roberts, Christian K; Roberts, Michael D; Booth, Frank W

    2015-01-01

    In maturing rats, the growth of abdominal fat is attenuated by voluntary wheel running. After the cessation of running by wheel locking, a rapid increase in adipose tissue growth to a size that is similar to rats that have never run (i.e. catch-up growth) has been previously reported by our lab. In contrast, diet-induced increases in adiposity have a slower onset with relatively delayed transcriptomic responses. The purpose of the present study was to identify molecular pathways associated with the rapid increase in adipose tissue after ending 6 wks of voluntary running at the time of puberty. Age-matched, male Wistar rats were given access to running wheels from 4 to 10 weeks of age. From the 10th to 11th week of age, one group of rats had continued wheel access, while the other group had one week of wheel locking. Perirenal adipose tissue was extracted, RNA sequencing was performed, and bioinformatics analyses were executed using Ingenuity Pathway Analysis (IPA). IPA was chosen to assist in the understanding of complex 'omics data by integrating data into networks and pathways. Wheel locked rats gained significantly more fat mass and significantly increased body fat percentage between weeks 10-11 despite having decreased food intake, as compared to rats with continued wheel access. IPA identified 646 known transcripts differentially expressed (p < 0.05) between continued wheel access and wheel locking. In wheel locked rats, IPA revealed enrichment of transcripts for the following functions: extracellular matrix, macrophage infiltration, immunity, and pro-inflammatory. These findings suggest that the increases in visceral adipose tissue that accompany the cessation of pubertal physical activity are associated with the alteration of multiple pathways, some of which may potentiate the development of pubertal obesity and obesity-associated systemic low-grade inflammation that occurs later in life.

  12. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk-run-rest mixtures.

    PubMed

    Long, Leroy L; Srinivasan, Manoj

    2013-04-06

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance. If the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
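
    A minimal numerical version of the walk-run optimization reads as follows; the metabolic-rate curves are invented stand-ins rather than the curves used in the paper, and rest is omitted for brevity.

      import numpy as np

      def energy(f, vw, vr, D, T):
          # Spend fraction f of time T walking at vw, the rest running at vr.
          if f * vw * T + (1 - f) * vr * T < D:
              return np.inf                  # mixture fails to cover distance D
          e_walk = 2.0 + 1.5 * vw**2         # assumed cost rates (W/kg)
          e_run = 4.0 + 1.0 * vr**2
          return f * T * e_walk + (1 - f) * T * e_run

      D, T = 1000.0, 420.0                   # e.g. 1 km in 7 minutes
      grid = [(f, vw, vr)
              for f in np.linspace(0.0, 1.0, 51)
              for vw in np.linspace(0.8, 2.0, 25)
              for vr in np.linspace(2.0, 5.0, 31)]
      best = min(grid, key=lambda p: energy(*p, D, T))

    With non-convex cost curves, the minimizer typically lands at an interior f, i.e. a genuine walk-run mixture rather than a single steady speed.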

  13. The Theory of Multiple Intelligences: A Case of Missing Cognitive Matter.

    ERIC Educational Resources Information Center

    Allix, Nicholas M.

    2000-01-01

    Argues that although Gardner's conception of human cognition, characterized by a set of multiple and distinct cognitive capabilities, is an advance over the narrow conception of IQ, it runs into fundamental difficulties of a methodological kind and is based on a discredited empiricist theory of knowledge which work with artificial neural networks…

  14. Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues.

    PubMed

    Pandit, Jaideep J; Dexter, Franklin

    2009-06-01

    At multiple facilities, including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte Carlo simulation with normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, the inefficiency of use of OR time was determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used averages ≤8 h 25 min, 8 h of staffing has higher OR efficiency than 10 h for all combinations of standard deviation and relative cost of over-run to under-run. When the mean is ≥8 h 50 min, 10 h staffing has higher OR efficiency. For means between 8 h 25 min and 8 h 50 min, the economic break-even point depends on conditions. For example, break-even is: (a) 8 h 27 min for a Weibull distribution, standard deviation of 60 min, and relative cost of over-run to under-run of 2.0; versus (b) 8 h 48 min for a normal distribution, standard deviation of 0 min, and relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is ≤8 h 40 min and to staff for 10 h otherwise, its performance was poor. For example, for the Weibull distribution with mean 8 h 40 min, standard deviation 60 min, and relative cost ratio of 2.00, the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing: if actual hours of OR time used average ≤8 h 25 min, plan 8 h staffing; if they average ≥8 h 50 min, plan 10 h staffing. For averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
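
    The staffing comparison is easy to reproduce in outline. This Monte Carlo sketch uses the standard inefficiency definition (under-utilized hours plus the cost ratio times over-run hours) with illustrative parameters, not the paper's full grid.

      import numpy as np

      def inefficiency(staffed_h, workload_h, cost_ratio):
          under = np.maximum(staffed_h - workload_h, 0.0)   # idle staffed time
          over = np.maximum(workload_h - staffed_h, 0.0)    # over-run time
          return under + cost_ratio * over

      rng = np.random.default_rng(0)
      workload = rng.normal(8.5, 1.0, size=100_000)         # mean 8 h 30 min
      for staffed in (8.0, 10.0):
          e = inefficiency(staffed, workload, cost_ratio=2.0).mean()
          print(f"staffing {staffed:.0f} h: expected inefficiency {e:.2f} h")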

  15. Automated processing of fluorescence in-situ hybridization slides for HER2 testing in breast and gastro-esophageal carcinomas.

    PubMed

    Tafe, Laura J; Allen, Samantha F; Steinmetz, Heather B; Dokus, Betty A; Cook, Leanne J; Marotti, Jonathan D; Tsongalis, Gregory J

    2014-08-01

    HER2 fluorescence in-situ hybridization (FISH) is used in breast and gastro-esophageal carcinoma for determining HER2 gene amplification and patients' eligibility for HER2-targeted therapeutics. Traditional manual processing of FISH slides is labor intensive because of multiple steps that require hands-on manipulation of the slides and specifically timed intervals between steps. This highly manual processing also introduces inter-run and inter-operator variability that may affect the quality of the FISH result. Therefore, we sought to incorporate an automated processing instrument into our FISH workflow. Twenty-six cases, including breast (20) and gastro-esophageal (6) cancers, comprising 23 biopsies and three excision specimens, were tested for HER2 FISH (Pathvysion, Abbott) using the Thermobrite Elite (TBE) system (Leica). Up to 12 slides can be run simultaneously. All cases were previously tested by the Pathvysion HER2 FISH assay with manual preparation. Twenty cells were counted by two observers for each case; five cases were tested on three separate runs by different operators to evaluate the precision and inter-operator variability. There was 100% concordance in the scoring between the manual and TBE methods, as well as among the five cases that were tested on three runs. Only one case failed, due to poor probe hybridization. In total, seven cases were positive for HER2 amplification (HER2:CEP17 ratio >2.2) and the remaining 19 were negative (HER2:CEP17 ratio <1.8) utilizing the 2007 ASCO/CAP scoring criteria. Due to the automated denaturation and hybridization, each run saved 3.5 h of labor, which could then be dedicated to other lab functions. The TBE is a walk-away pre- and post-hybridization system that automates FISH slide processing, improves workflow and consistency, and saves approximately 3.5 h of technologist time. The instrument has a small footprint, thus occupying minimal counter space. TBE-processed slides performed exceptionally well in comparison to the manual technique, with no disagreement in HER2 amplification status. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Operating system for a real-time multiprocessor propulsion system simulator. User's manual

    NASA Technical Reports Server (NTRS)

    Cole, G. L.

    1985-01-01

    The NASA Lewis Research Center is developing and evaluating experimental hardware and software systems to help meet future needs for real-time, high-fidelity simulations of air-breathing propulsion systems. Specifically, the real-time multiprocessor simulator project focuses on the use of multiple microprocessors to achieve the required computing speed and accuracy at relatively low cost. Operating systems for such hardware configurations are generally not available. A real-time multiprocessor operating system (RTMPOS) that supports a variety of multiprocessor configurations was developed at Lewis. With some modification, RTMPOS can also support various microprocessors. RTMPOS, by means of menus and prompts, provides the user with a versatile, user-friendly environment for interactively loading, running, and obtaining results from a multiprocessor-based simulator. The menu functions are described and an example simulation session is included to demonstrate the steps required to go from the simulation loading phase to the execution phase.

  17. Automated Flight Dynamics Product Generation for the EOS AM-1 Spacecraft

    NASA Technical Reports Server (NTRS)

    Matusow, Carla

    1999-01-01

    As part of NASA's Earth Science Enterprise, the Earth Observing System (EOS) AM-1 spacecraft is designed to monitor long-term, global, environmental changes. Because of the complexity of the AM-1 spacecraft, the mission operations center requires more than 80 distinct flight dynamics products (reports). To create these products, the AM-1 Flight Dynamics Team (FDT) will use a combination of modified commercial software packages (e.g., Analytical Graphic's Satellite ToolKit) and NASA-developed software applications. While providing the most cost-effective solution to meeting the mission requirements, the integration of these software applications raises several operational concerns: (1) Routine product generation requires knowledge of multiple applications executing on a variety of hardware platforms. (2) Generating products is a highly interactive process requiring a user to interact with each application multiple times to generate each product. (3) Routine product generation requires several hours to complete. (4) User interaction with each application introduces the potential for errors, since users are required to manually enter filenames and input parameters as well as run applications in the correct sequence. Generating products requires some level of flight dynamics expertise to determine the appropriate inputs and sequencing. To address these issues, the FDT developed an automation software tool called AutoProducts, which runs on a single hardware platform and provides all necessary coordination and communication among the various flight dynamics software applications. AutoProducts autonomously retrieves necessary files, sequences and executes applications with the correct input parameters, and delivers the final flight dynamics products to the appropriate customers. Although AutoProducts will normally generate pre-programmed sets of routine products, its graphical interface allows for easy configuration of customized and one-of-a-kind products. Additionally, AutoProducts has been designed as a mission-independent tool, and can be easily reconfigured to support other missions or incorporate new flight dynamics software packages. After the AM-1 launch, AutoProducts will run automatically at pre-determined time intervals. The AutoProducts tool reduces many of the concerns associated with flight dynamics product generation. Although AutoProducts required a significant development effort because of the complexity of the interfaces involved, its use will provide significant cost savings through reduced operator time and maximum product reliability. In addition, user satisfaction is significantly improved and flight dynamics experts have more time to perform valuable analysis work. This paper will describe the evolution of the AutoProducts tool, highlighting the cost savings and customer satisfaction resulting from its development. It will also provide details about the tool, including its graphical interface and operational capabilities.

  18. A Monotonic Degradation Assessment Index of Rolling Bearings Using Fuzzy Support Vector Data Description and Running Time

    PubMed Central

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs. Yet performance degradation has strong fuzziness, and the dynamic information is random and fuzzy, making it challenging to assess bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and increases stably as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the index ε̄ has an oscillating trend, which disagrees with the irreversible nature of damage development. The running time is therefore introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all the advantages of ε̄ and overcomes its disadvantage. A run-to-failure test was carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of damage with running time well. PMID:23112591

  19. A monotonic degradation assessment index of rolling bearings using fuzzy support vector data description and running time.

    PubMed

    Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen

    2012-01-01

    Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs. Yet performance degradation has strong fuzziness, and the dynamic information is random and fuzzy, making it challenging to assess bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and increases stably as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the index ε̄ has an oscillating trend, which disagrees with the irreversible nature of damage development. The running time is therefore introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all the advantages of ε̄ and overcomes its disadvantage. A run-to-failure test was carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of damage with running time well.

  20. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA)

    NASA Technical Reports Server (NTRS)

    Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff

    2016-01-01

    The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble-sized telescope, donated from elsewhere in the federal government, for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down, achieved by solving portions of the model at different time scales. Lastly, the impact of removing small radiation couplings on run time and accuracy was investigated. Use of these techniques allowed the models to produce meaningful results within reasonable run times, meeting project schedule deadlines.
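
    The slew-screening idea can be sketched directly: recompute environmental fluxes only when the pointing has changed by more than a tolerance since the last computed orientation. In this illustrative Python fragment, attitudes are unit vectors and the threshold is an assumption, not the WFIRST value.

      import numpy as np

      def flux_calc_points(attitudes, max_sep_deg=5.0):
          # Return indices of orientations whose fluxes must be recalculated.
          keep, last = [0], attitudes[0]
          for i in range(1, len(attitudes)):
              cosang = np.clip(np.dot(last, attitudes[i]), -1.0, 1.0)
              if np.degrees(np.arccos(cosang)) > max_sep_deg:
                  keep.append(i)
                  last = attitudes[i]
          return keep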

  1. Time takes space: selective effects of multitasking on concurrent spatial processing.

    PubMed

    Mäntylä, Timo; Coni, Valentina; Kubik, Veit; Todorov, Ivo; Del Missier, Fabio

    2017-08-01

    Many everyday activities require coordination and monitoring of complex relations of future goals and deadlines. Cognitive offloading may provide an efficient strategy for reducing control demands by representing future goals and deadlines as a pattern of spatial relations. We tested the hypothesis that multiple-task monitoring involves time-to-space transformational processes, and that these spatial effects are selective with greater demands on coordinate (metric) than categorical (nonmetric) spatial relation processing. Participants completed a multitasking session in which they monitored four series of deadlines, running on different time scales, while making concurrent coordinate or categorical spatial judgments. We expected and found that multitasking taxes concurrent coordinate, but not categorical, spatial processing. Furthermore, males showed a better multitasking performance than females. These findings provide novel experimental evidence for the hypothesis that efficient multitasking involves metric relational processing.

  2. Joint detection and localization of multiple anatomical landmarks through learning

    NASA Astrophysics Data System (ADS)

    Dikmen, Mert; Zhan, Yiqiang; Zhou, Xiang Sean

    2008-03-01

    Reliable landmark detection in medical images provides the essential groundwork for successful automation of various open problems such as localization, segmentation, and registration of anatomical structures. In this paper, we present a learning-based system to jointly detect (is it there?) and localize (where?) multiple anatomical landmarks in medical images. This work makes two contributions. First, the method takes advantage of a learning scenario that automatically extracts the most distinctive features for multi-landmark detection. It is therefore easily adaptable to detecting arbitrary landmarks in various kinds of imaging modalities, e.g., CT, MRI and PET. Second, the use of a multi-class/cascaded classifier architecture in different phases of the detection stage, combined with robust features that are highly efficient in terms of computation time, enables near-real-time performance with very high localization accuracy. The method is validated on CT scans of different body sections, e.g., whole body scans, chest scans and abdominal scans. Aside from improved robustness (due to the exploitation of spatial correlations), it gains run-time efficiency in landmark detection. It also shows good scalability under an increasing number of landmarks.

  3. The LSST Scheduler from design to construction

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Reuter, Michael A.

    2016-07-01

    The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding a very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS), that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and the internal conditions of the observatory. The design of the LSST Scheduler started early in the project supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the survey capabilities required. In order to build such a critical component, an agile development path in incremental releases is presented, integrated to the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.

  4. 77 FR 50198 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-20

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...

  5. Towards Run-time Assurance of Advanced Propulsion Algorithms

    NASA Technical Reports Server (NTRS)

    Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy

    2014-01-01

    This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.

  6. Comparing Energy Expenditure in Adolescents With and Without Autism While Playing Nintendo(®) Wii(™) Games.

    PubMed

    Getchell, Nancy; Miccinello, Dannielle; Blom, Michelle; Morris, Lyssa; Szaroleta, Mark

    2012-02-01

    Obesity rates are on the rise in individuals with autism spectrum disorders (ASD), which underscores the importance of finding new ways in which to engage this population in physical activity. We wanted to explore the energy expenditure of adolescents with and without ASD while playing Nintendo(®) Wii(™) (Nintendo of America, Inc., Redmond, WA) games compared with more traditional exercise modalities. Specifically, we wanted to compare energy expenditure (EE) among the different activities and to see which activities led to the greatest amount of time classified as "moderate to vigorous." Two groups of adolescents (15 with ASD [mean age, 17.50±2.4 years], 15 without ASD [mean age, 17.23±4.1 years]) participated in 20-minute bouts of walking, running, and playing Nintendo Wii "Sport(™)," Wii "Fit(™)," and "Dance Dance Revolution" (DDR) (Konami Digital Entertainment, Inc., El Segundo, CA). During each session, EE was calculated using an Actical (Mini Mitter Co., Bend, OR) accelerometer. Groups were compared using multiple t tests. Both groups expended similar amounts of kilocalories in all activities, except for Wii Fit, in which the ASD group expended significantly more kilocalories. For the ASD group, EE was greatest in running, followed by walking, DDR, Wii Fit, and Wii Sport. Walking, running, and DDR all had at least 75 percent of the total time spent at moderate to vigorous intensity levels. We suggest videogame systems, such as the Nintendo Wii, may be a viable alternative for individuals with ASD to increase their daily physical activity and help alleviate the growing rates of obesity.

  7. Virus elimination during the purification of monoclonal antibodies by column chromatography and additional steps.

    PubMed

    Roberts, Peter L

    2014-01-01

    The theoretical potential for virus transmission by monoclonal antibody-based therapeutic products has led to the inclusion of appropriate virus reduction steps. In this study, virus elimination by the chromatographic steps used during the purification process for two (IgG-1 and -3) monoclonal antibodies (MAbs) was investigated. Both the Protein G (>7 log) and ion-exchange (5 log) chromatography steps were very effective for eliminating both enveloped and non-enveloped viruses over the lifetime of the chromatographic gel. However, the contribution made by the final gel filtration step was more limited, i.e., 3 log. Because these chromatographic columns were recycled between uses, the effectiveness of the column sanitization procedures (guanidinium chloride for Protein G or NaOH for ion-exchange) was tested. By evaluating standard column runs immediately after each virus-spiked run, it was possible to directly confirm that there was no cross contamination with virus between column runs. To further ensure the virus safety of the product, two specific virus elimination steps have also been included in the process. A solvent/detergent step based on 1% Triton X-100 rapidly inactivated a range of enveloped viruses, giving >6 log inactivation within 1 min of a 60 min treatment time. Virus removal by the virus filtration step was also confirmed to be effective for viruses of about 50 nm or greater. In conclusion, the combination of these multiple steps ensures a high margin of virus safety for this purification process. © 2014 American Institute of Chemical Engineers.

  8. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning in which vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged in which seemingly limitless compute and storage resources are provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good run-time performance to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their patterns of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems, one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. Throughout, we use the Map-Reduce paradigm as an illustration of data analytics.

  9. LUXSim: A component-centric approach to low-background simulations

    DOE PAGES

    Akerib, D. S.; Bai, X.; Bedikian, S.; ...

    2012-02-13

    Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
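
    LUXSim itself is C++ on Geant4; the hypothetical Python fragment below merely illustrates the component-centric pattern the record describes, with sources and record levels attached to individual components at run time (all names invented).

      class Component:
          def __init__(self, name):
              self.name = name
              self.sources = []          # (isotope, activity) pairs
              self.record_level = 0      # per-component output verbosity

          def add_source(self, isotope, activity_bq):
              self.sources.append((isotope, activity_bq))

      registry = {}
      def register(name):
          registry[name] = Component(name)
          return registry[name]

      pmt = register("PMT_Window")
      pmt.add_source("U238", 0.25)       # several sources in one component,
      pmt.add_source("Th232", 0.10)      # all handled in a single run
      pmt.record_level = 2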

  10. Multifractal analysis of a GCM climate

    NASA Astrophysics Data System (ADS)

    Carl, P.

    2003-04-01

    Multifractal analysis using the Wavelet Transform Modulus Maxima (WTMM) approach is applied to the climate of a Mintz-Arakawa type, coarse resolution, two-layer AGCM. The model shows a backwards-running period multiplication scenario throughout the northern summer, subsequent to a 'hard', subcritical Hopf bifurcation late in spring. This 'route out of chaos' (seen in cross sections of a toroidal phase space structure) is born in the planetary monsoon system, which inflates the seasonal 'cycle' into these higher order structures and is blamed for the pronounced intraseasonal-to-centennial model climate variability. Previous analyses of the latter using advanced modal decompositions showed regularity-based patterns in the time-frequency plane which are qualitatively similar to those obtained from the real world. The closer look here at the singularity structures, as a fundamental diagnostic supplement, aims both at a more complete understanding (and quantification) of the model's qualitative dynamics and at further tools of model intercomparison and verification in this respect. The analysing wavelet is the 10th derivative of the Gaussian, which should suffice to suppress regular patterns in the data. Intraseasonal attractors, studied in time series of model precipitation over Central India, show shifting and broadening singularity spectra towards both more violent extreme events (premonsoon-monsoon transition) and weaker events (late summer to postmonsoon transition). Hints at a fractal basin boundary are found close to the transition from period-2 to period-1 in the monsoon activity cycle. Interannual analyses are provided for runs with varied solar constants. To address the (in-)stationarity issue, first results are presented with a windowed multifractal analysis of longer-term runs ("singularity spectrogram").

  11. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. PLHS therefore preserves the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it largely avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
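
    For reference, one-stage LHS, the baseline that PLHS slices progressively, fits in a few lines of NumPy (a sketch, not the authors' implementation):

      import numpy as np

      def latin_hypercube(n, d, rng=np.random.default_rng()):
          # One stratified point per bin in every 1-D projection.
          u = rng.random((n, d))
          perms = np.column_stack([rng.permutation(n) for _ in range(d)])
          return (perms + u) / n

    PLHS, by contrast, generates each slice so that the union of all slices produced so far retains this property.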

  12. 40 CFR Table 1b to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss-cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error-detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
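
    For readers unfamiliar with the gamma test mentioned above, the following Python sketch implements a textbook global 1-D gamma evaluation with a 3%/3 mm criterion; the paper's 2-D EPID implementation and tolerance settings are not reproduced here.

    ```python
    import numpy as np

    def gamma_1d(ref, meas, dx_mm, dose_tol=0.03, dist_tol_mm=3.0):
        # Global gamma index: for each reference point, the minimum combined
        # dose-difference/distance metric over all measured points; a point
        # passes when gamma <= 1.
        x = np.arange(len(ref)) * dx_mm
        norm = dose_tol * ref.max()            # global dose normalization
        gamma = np.empty(len(ref))
        for i in range(len(ref)):
            dd = (meas - ref[i]) / norm
            dr = (x - x[i]) / dist_tol_mm
            gamma[i] = np.sqrt(dd ** 2 + dr ** 2).min()
        return gamma
    ```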

  14. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis.

    PubMed

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process is attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a composition of functions across multiple layers, is capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of bead milling parameters. Since a GA can optimize even discontinuous functions, the present study is an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent ill-defined biological functions, as is common for industrial processes involving biological moieties.
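
    To make the ANN-GA coupling concrete, below is a minimal real-coded genetic algorithm in Python; in the study's setting, the objective f would be the trained network's prediction of COD recovery as a function of the four process variables. The operators and parameter values here are generic illustrations, not those used by the authors.

    ```python
    import numpy as np

    def ga_maximize(f, bounds, pop=40, gens=60, seed=0):
        # Real-coded GA: binary tournament selection, arithmetic crossover,
        # Gaussian mutation. f maps a parameter vector to a fitness value
        # (e.g., an ANN surrogate's predicted enzyme recovery).
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            fit = np.array([f(x) for x in X])
            pairs = rng.integers(0, pop, (pop, 2))           # binary tournaments
            winners = np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]],
                               pairs[:, 0], pairs[:, 1])
            parents = X[winners]
            mates = parents[rng.permutation(pop)]
            a = rng.random((pop, 1))
            X = a * parents + (1 - a) * mates                # arithmetic crossover
            X += rng.normal(0.0, 0.05 * (hi - lo), X.shape)  # Gaussian mutation
            X = np.clip(X, lo, hi)
        fit = np.array([f(x) for x in X])
        return X[fit.argmax()], fit.max()
    ```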

  15. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis

    PubMed Central

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A.; Soni, Nipunjot; Mandal, Raju K.; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y.; Govender, Thavendran; Kruger, Hendrik G.; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD600 nm of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h) of 258.08, bead loading (%, v/v) of 80, cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process is attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a composition of functions across multiple layers, is capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of bead milling parameters. Since a GA can optimize even discontinuous functions, the present study is an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent ill-defined biological functions, as is common for industrial processes involving biological moieties. PMID:27920762

  16. A dilute-and-shoot flow-injection tandem mass spectrometry method for quantification of phenobarbital in urine.

    PubMed

    Alagandula, Ravali; Zhou, Xiang; Guo, Baochuan

    2017-01-15

    Liquid chromatography/tandem mass spectrometry (LC/MS/MS) is the gold standard of urine drug testing. However, current LC-based methods are time consuming, limiting the throughput of MS-based testing and increasing its cost. This is particularly problematic for quantification of drugs such as phenobarbital, which are often analyzed in a separate run because they must be negatively ionized. This study examined the feasibility of using a dilute-and-shoot flow-injection method without LC separation to quantify drugs, with phenobarbital as a model system. Briefly, a urine sample containing phenobarbital was first diluted 10 times, followed by flow injection of the diluted sample into the mass spectrometer. Quantification and detection of phenobarbital were achieved by an electrospray negative-ionization MS/MS system operated in the multiple reaction monitoring (MRM) mode with the stable-isotope-labeled drug as internal standard. The dilute-and-shoot flow-injection method developed was linear over a dynamic range of 50-2000 ng/mL of phenobarbital with a correlation coefficient > 0.9996. The coefficients of variation and relative errors for intra- and inter-assays at four quality control (QC) levels (50, 125, 445 and 1600 ng/mL) were 3.0% and 5.0%, respectively. The total run time to quantify one sample was 2 min, and the sensitivity and specificity of the method did not deteriorate even after 1200 consecutive injections. Our method can accurately and robustly quantify phenobarbital in urine without LC separation. Because of its 2-min run time, the method can process 720 samples per day. This feasibility study shows that the dilute-and-shoot flow-injection method can be a general way for fast analysis of drugs in urine. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures that perform execution-time preprocessing, and executors, i.e., transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
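
    The inspector/executor split described above can be sketched in a few lines of Python. The sketch below is deliberately conservative, serializing any two iterations that touch the same address (so even read-read pairs serialize); the paper's method performs a proper dependence analysis, so treat this as an illustration of the pattern only.

    ```python
    from collections import defaultdict

    def inspector(reads, writes, n_iters):
        # Execution-time preprocessing: give each iteration a wavefront
        # number one greater than that of the last iteration touching any
        # address it accesses (conservative: read-read pairs also serialize).
        wave = [0] * n_iters
        last = {}                    # last iteration that touched each address
        for i in range(n_iters):
            touched = list(reads[i]) + list(writes[i])
            wave[i] = 1 + max((wave[last[a]] for a in touched if a in last),
                              default=-1)
            for a in touched:
                last[a] = i
        return wave

    def executor(body, wave):
        # Run iterations wavefront by wavefront; iterations within one
        # wavefront are independent and could be dispatched in parallel.
        groups = defaultdict(list)
        for i, w in enumerate(wave):
            groups[w].append(i)
        for w in sorted(groups):
            for i in groups[w]:
                body(i)
    ```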

  18. Software for Allocating Resources in the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester; Zendejas, Silvino; Baldwin, John

    2003-01-01

    TIGRAS 2.0 is a computer program designed to satisfy a need for improved means for analyzing the tracking demands of interplanetary space-flight missions upon the set of ground antenna resources of the Deep Space Network (DSN) and for allocating those resources. Written in Microsoft Visual C++, TIGRAS 2.0 provides a single rich graphical analysis environment for use by diverse DSN personnel, by connecting to various data sources (relational databases or files) based on the stages of the analyses being performed. Notable among the algorithms implemented by TIGRAS 2.0 are a DSN antenna-load-forecasting algorithm and a conflict-aware DSN schedule-generating algorithm. Computers running TIGRAS 2.0 can also be connected using SOAP/XML to a Web services server that provides analysis services via the World Wide Web. TIGRAS 2.0 supports multiple windows, and multiple panes in each window, for users to view and use information in the same environment, eliminating repeated switching among various application programs and Web pages. Multiple windows can be used to view mission requirements, trajectory-based time intervals during which spacecraft are viewable, ground resources, forecasts, and schedules. Each window includes a time navigation pane, a selection pane, a graphical display pane, a list pane, and a statistics pane.

  19. Architecture Adaptive Computing Environment

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    2006-01-01

    Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.

  20. AntiClustal: Multiple Sequence Alignment by antipole clustering and linear approximate 1-median computation.

    PubMed

    Di Pietro, C; Di Pietro, V; Emmanuele, G; Ferro, A; Maugeri, T; Modica, E; Pigola, G; Pulvirenti, A; Purrello, M; Ragusa, M; Scalia, M; Shasha, D; Travali, S; Zimmitti, V

    2003-01-01

    In this paper we present a new Multiple Sequence Alignment (MSA) algorithm called AntiClustAl. The method makes use of the commonly used idea of aligning homologous sequences belonging to classes generated by some clustering algorithm, and then continuing the alignment process in a bottom-up way along a suitable tree structure. The final result is then read at the root of the tree. Multiple sequence alignment in each cluster makes use of progressive alignment with the 1-median (center) of the cluster. The 1-median of a set S of sequences is the element of S which minimizes the average distance from any other sequence in S. Its exact computation requires quadratic time. The basic idea of our proposed algorithm is to make use of a simple and natural algorithmic technique based on randomized tournaments, which has been successfully applied to large-size search problems in general metric spaces. In particular, a clustering algorithm called Antipole tree and an approximate linear-time 1-median computation are used. Our algorithm, compared with Clustal W, a widely used tool for MSA, shows better running times with fully comparable alignment quality. A successful biological application showing high amino acid conservation during the evolution of Xenopus laevis SOD2 is also cited.
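
    The randomized-tournament idea behind the approximate 1-median can be sketched as follows in Python, where dist is any sequence distance (e.g., edit distance); this illustrates the technique in general, not the AntiClustAl implementation.

    ```python
    import random

    def approx_one_median(seqs, dist, fanout=3, seed=0):
        # Randomized tournament: repeatedly split the candidates into small
        # groups and keep each group's local 1-median; the total number of
        # distance evaluations grows only linearly in len(seqs).
        rng = random.Random(seed)
        cand = list(seqs)
        while len(cand) > 1:
            rng.shuffle(cand)
            cand = [min(group, key=lambda s: sum(dist(s, t) for t in group))
                    for group in (cand[i:i + fanout]
                                  for i in range(0, len(cand), fanout))]
        return cand[0]
    ```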

  1. BrightStat.com: free statistics online.

    PubMed

    Stricker, Daniel

    2008-10-01

    Powerful software for statistical analysis is expensive. Here I present BrightStat, statistical software running on the Internet which is free of charge. BrightStat's goals and its main capabilities and functionalities are outlined. Three different sample runs, a Friedman test, a chi-square test, and a step-wise multiple regression, are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in statistical software, and by VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.

  2. Agricultural Airplane Mission Time Structure Characteristics

    NASA Technical Reports Server (NTRS)

    Jewel, J. W., Jr.

    1982-01-01

    The time structure characteristics of agricultural airplane missions were studied by using records from NASA VGH flight recorders. Flight times varied from less than 3 minutes to more than 103 minutes. There was a significant reduction in turning time between spreading runs as pilot experience in the airplane type increased. Spreading runs accounted for only 25 to 29 percent of the flight time of an agricultural airplane. Lowering the longitudinal stick force appeared to reduce both the turning time between spreading runs and pilot fatigue at the end of a working day.

  3. Numerical simulation of the effects of urban land-use changes on the local climate of multiple desert cities

    NASA Astrophysics Data System (ADS)

    Kamal, S. M.; Huang, H. P.; Myint, S. W.

    2016-12-01

    This study quantifies the effect of urbanization on local climate by numerical simulations for multiple desert cities with a wide range of urban size, baseline climatology, and composition of land cover. The numerical experiments use the Weather Research and Forecasting (WRF) model with multiple layers of nesting centered at a desert city. To extract the influence of land-use changes, twin runs are performed with each pair driven by the same time-varying lateral boundary conditions from reanalysis but different land surface conditions from Landsat observations for 1985 and 2010. The differences in the meteorological fields between the two runs are interpreted as the effects of land-use changes due to urbanization from 1985-2010. Using this strategy, simulations are carried out for five desert cities: (1) Las Vegas, United States, (2) Hotan, China, (3) Kharga, Egypt, (4) Beer Sheva, Israel, and (5) Jodhpur, India. The results of the simulations reveal a common pattern of the climatic effect of desert urbanization with nighttime warming but daytime cooling over areas where urbanization occurred. This effect is mainly confined to the urban area and is not sensitive to the size of the city or the detail of land cover in the surrounding non-urban areas. The pattern is similar in winter and summer. Exceptions to this pattern are found in a few cases in which the noisiness of local circulation, specifically monsoon and land-sea breeze, overwhelms the climatic signal induced by land-use changes. Although the local climatic responses to urbanization are qualitatively similar for the five desert cities, quantitative differences exist in the magnitudes of nighttime warming and daytime cooling. The possible reasons for those secondary differences are discussed.

  4. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
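
    A minimal version of this stability ranking can be written with scikit-learn's FastICA, as sketched below: ICA is run several times with different seeds, and each component of a reference run is scored by its best absolute correlation in every other run. The function name and the simple correlation-based matching are illustrative assumptions; the authors' protocol is more elaborate.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_stability(X, n_components, n_runs=10, seed=0):
        # Run ICA n_runs times and score each component of the first run by
        # its mean best-match |correlation| across the remaining runs; the
        # qualitative change in the sorted profile suggests an MSTD-like cutoff.
        runs = [FastICA(n_components=n_components, random_state=seed + r,
                        max_iter=1000).fit_transform(X) for r in range(n_runs)]
        ref = runs[0]
        stability = np.empty(n_components)
        for k in range(n_components):
            best = [max(abs(np.corrcoef(ref[:, k], other[:, j])[0, 1])
                        for j in range(n_components))
                    for other in runs[1:]]
            stability[k] = np.mean(best)
        return np.sort(stability)[::-1]
    ```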

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitney, S.E.

    This paper highlights the use of the CAPE-OPEN (CO) standard interfaces in the Advanced Process Engineering Co-Simulator (APECS) developed at the National Energy Technology Laboratory (NETL). The APECS system uses the CO unit operation, thermodynamic, and reaction interfaces to provide its plug-and-play co-simulation capabilities, including the integration of process simulation with computational fluid dynamics (CFD) simulation. APECS also relies heavily on the use of a CO COM/CORBA bridge for running process/CFD co-simulations on multiple operating systems. For process optimization in the face of multiple and sometimes conflicting objectives, APECS offers stochastic modeling and multi-objective optimization capabilities developed to comply with the CO software standard. At NETL, system analysts are applying APECS to a wide variety of advanced power generation systems, ranging from small fuel cell systems to commercial-scale power plants including the coal-fired, gasification-based FutureGen power and hydrogen production plant.

  6. Analyzing multiple data sets by interconnecting RSAT programs via SOAP Web services: an example with ChIP-chip data.

    PubMed

    Sand, Olivier; Thomas-Chollier, Morgane; Vervisch, Eric; van Helden, Jacques

    2008-01-01

    This protocol shows how to access the Regulatory Sequence Analysis Tools (RSAT) via a programmatic interface in order to automate the analysis of multiple data sets. We describe the steps for writing a Perl client that connects to the RSAT Web services and implements a workflow to discover putative cis-acting elements in promoters of gene clusters. In the presented example, we apply this workflow to lists of transcription factor target genes resulting from ChIP-chip experiments. For each factor, the protocol predicts the binding motifs by detecting significantly overrepresented hexanucleotides in the target promoters and generates a feature map that displays the positions of putative binding sites along the promoter sequences. This protocol is addressed to bioinformaticians and biologists with programming skills (notions of Perl). Running time is approximately 6 min on the example data set.

  7. Reduction and Analysis of GALFACTS Data in Search of Compact Variable Sources

    NASA Astrophysics Data System (ADS)

    Wenger, Trey; Barenfeld, S.; Ghosh, T.; Salter, C.

    2012-01-01

    The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo sky, full-Stokes survey from 1225 to 1525 MHz using the multibeam Arecibo L-band Feed Array (ALFA). Using data from survey field N1, the first field covered by GALFACTS, we are searching for compact sources that vary in intensity and/or polarization. The multistep procedure for reducing the data includes radio frequency interference (RFI) removal, source detection, Gaussian fitting in multiple dimensions, polarization leakage calibration, and gain calibration. We have developed code to analyze and calculate the calibration parameters from the N1 calibration sources, and apply these to the data of the main run. For detected compact sources, our goal is to compare results from multiple passes over a source to search for rapid variability, as well as to compare our flux densities with those from the NRAO VLA Sky Survey (NVSS) to search for longer time-scale variations.

  8. MatLab Script and Functional Programming

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali

    2007-01-01

    MatLab Script and Functional Programming: MatLab is one of the most widely used very high level programming languages for scientific and engineering computations. It is very user-friendly and requires practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and little time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. The MatLab seminar covers the functional and script programming aspects of the MatLab language. Specific expectations are: a) Recognize MatLab commands, scripts and functions. b) Create and run a MatLab function. c) Read, recognize, and describe MatLab syntax. d) Recognize decisions, loops and matrix operators. e) Evaluate scope among multiple files, and multiple functions within a file. f) Declare, define and use scalar variables, vectors and matrices.

  9. Simulation and analysis of support hardware for multiple instruction rollback

    NASA Technical Reports Server (NTRS)

    Alewine, Neil J.

    1992-01-01

    Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.

  10. A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas

    NASA Astrophysics Data System (ADS)

    Gudehus, D. H.

    2001-05-01

    A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/~gudehus/binary.html.

  11. Fast algorithm for spectral processing with application to on-line welding quality assurance

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
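
    For reference, the simplest sub-pixel estimator of this kind is three-point parabolic interpolation around the discrete peak, sketched below in Python; the LPO operator evaluated in the paper is a different (linear-phase) estimator.

    ```python
    def subpixel_peak(y, i):
        # Fit a parabola through (i-1, y[i-1]), (i, y[i]), (i+1, y[i+1]) and
        # return the abscissa of its vertex as the sub-pixel peak position.
        a, b, c = y[i - 1], y[i], y[i + 1]
        denom = a - 2.0 * b + c
        delta = 0.5 * (a - c) / denom if denom != 0 else 0.0
        return i + delta
    ```

    With a calibrated wavelength axis, the fractional index returned here maps directly to a sub-pixel central wavelength for each emission line.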

  12. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.
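
    As a generic illustration of four-stage Runge-Kutta time marching of the kind mentioned, a Python sketch follows; the stage coefficients below are the common Jameson-style values, assumed for illustration rather than taken from the ADPACAPES code.

    ```python
    def rk4_march(u, residual, dt, n_steps):
        # Multistage (four-stage) Runge-Kutta time marching for a
        # semi-discrete finite-volume scheme du/dt = -residual(u);
        # each stage restarts from the step's initial state u0.
        alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)
        for _ in range(n_steps):
            u0 = u.copy()
            for a in alphas:
                u = u0 - a * dt * residual(u)
        return u
    ```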

  13. The reliability and validity of fatigue measures during multiple-sprint work: an issue revisited.

    PubMed

    Glaister, Mark; Howatson, Glyn; Pattison, John R; McInnes, Gill

    2008-09-01

    The ability to repeatedly produce a high-power output or sprint speed is a key fitness component of most field and court sports. The aim of this study was to evaluate the validity and reliability of eight different approaches to quantify this parameter in tests of multiple-sprint performance. Ten physically active men completed two trials of each of two multiple-sprint running protocols with contrasting recovery periods. Protocol 1 consisted of 12 x 30-m sprints repeated every 35 seconds; protocol 2 consisted of 12 x 30-m sprints repeated every 65 seconds. All testing was performed in an indoor sports facility, and sprint times were recorded using twin-beam photocells. All but one of the formulae showed good construct validity, as evidenced by similar within-protocol fatigue scores. However, the assumptions on which many of the formulae were based, combined with poor or inconsistent test-retest reliability (coefficient of variation range: 0.8-145.7%; intraclass correlation coefficient range: 0.09-0.75), suggested many problems regarding logical validity. In line with previous research, the results support the percentage decrement calculation as the most valid and reliable method of quantifying fatigue in tests of multiple-sprint performance.
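
    The percentage decrement calculation favoured by the study is easy to state in code; for sprint times (where larger is worse) it reads as follows.

    ```python
    def percentage_decrement(times):
        # Percentage decrement score for repeated-sprint times (seconds):
        # 100 * (total time / (n * best time) - 1); 0 means no slowdown.
        best = min(times)
        return 100.0 * (sum(times) / (len(times) * best) - 1.0)
    ```

    For example, twelve sprints whose times sum to 58.0 s with a best single sprint of 4.5 s give a decrement of 100 × (58.0 / 54.0 − 1) ≈ 7.4%.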

  14. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices when transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.

  15. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices when transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  16. Data Triage

    DTIC Science & Technology

    2007-06-01

    particle accelerators cannot run unless enough network bandwidth is available to absorb their data streams. DOE scientists running simulations routinely...send tuples to TelegraphCQ. To simulate a less-powerful machine, I increased the playback rate of the trace by a factor of 10 and reduced the query...III CPUs and 1.5 GB of main memory. To simulate using a less powerful embedded CPU, I wrote a program that would "play back" the trace at a multiple

  17. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607

  18. 76 FR 13683 - Self-Regulatory Organizations; The Fixed Income Clearing Corporation; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-14

    ... To Move the Time at Which It Runs Its Daily Morning Pass March 8, 2011. Pursuant to Section 19(b)(1... Backed Securities Division (``MBSD'') intends to move the time at which it runs its daily morning pass... notify participants that MBSD intends to move the time at which it runs its daily morning pass from 10:30...

  19. Mechanics and energetics of human locomotion on sand.

    PubMed

    Lejeune, T M; Willems, P A; Heglund, N C

    1998-07-01

    Moving about in nature often involves walking or running on a soft yielding substratum such as sand, which has a profound effect on the mechanics and energetics of locomotion. Force platform and cinematographic analyses were used to determine the mechanical work performed by human subjects during walking and running on sand and on a hard surface. Oxygen consumption was used to determine the energetic cost of walking and running under the same conditions. Walking on sand requires 1.6-2.5 times more mechanical work than does walking on a hard surface at the same speed. In contrast, running on sand requires only 1.15 times more mechanical work than does running on a hard surface at the same speed. Walking on sand requires 2.1-2.7 times more energy expenditure than does walking on a hard surface at the same speed; while running on sand requires 1.6 times more energy expenditure than does running on a hard surface. The increase in energy cost is due primarily to two effects: the mechanical work done on the sand, and a decrease in the efficiency of positive work done by the muscles and tendons.

  20. Isocapnic hyperpnea training improves performance in competitive male runners.

    PubMed

    Leddy, John J; Limprasertkul, Atcharaporn; Patel, Snehal; Modlich, Frank; Buyea, Cathy; Pendergast, David R; Lundgren, Claes E G

    2007-04-01

    The effects of voluntary isocapnic hyperpnea (VIH) training (10 h over 4 weeks, 30 min/day) on the ventilatory system and running performance were studied in 15 male competitive runners, 8 of whom trained twice weekly for 3 more months. Control subjects (n = 7) performed sham-VIH. Vital capacity (VC), FEV1, maximum voluntary ventilation (MVV), maximal inspiratory and expiratory mouth pressures, VO2max, 4-mile run time, treadmill run time to exhaustion at 80% VO2max, serum lactate, total ventilation (V(E)), oxygen consumption (VO2), oxygen saturation, and cardiac output were measured before and after 4 weeks of VIH. Respiratory parameters and 4-mile run time were measured monthly during the 3-month maintenance period. There were no significant changes in post-VIH VC and FEV1, but MVV improved significantly (+10%). Maximal inspiratory and expiratory mouth pressures, arterial oxygen saturation, and cardiac output did not change post-VIH. Respiratory and running performances were better 7 days versus 1 day after VIH. Seven days post-VIH, respiratory endurance (+208%) and treadmill run time (+50%) increased significantly, accompanied by significant reductions in respiratory frequency (-6%), V(E) (-7%), VO2 (-6%) and lactate (-18%) during the treadmill run. Post-VIH 4-mile run time did not improve in the control group, whereas it improved in the experimental group (-4%) and remained improved over a 3-month period of reduced VIH frequency. The improvements cannot be ascribed to improved blood oxygen delivery to muscle or to psychological factors.

  1. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  2. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  3. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  4. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...

  5. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
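
    The underlying computational problem can be stated concretely with a brute-force reference implementation, sketched below in Python: it enumerates all 2^|E| interaction states and is therefore usable only on tiny networks (edges are treated as undirected, an assumption of the sketch). PReach's polynomial-collapsing operators exist precisely to avoid this enumeration.

    ```python
    from collections import deque
    from itertools import product

    def reach_probability(n_nodes, edges, sources, targets):
        # edges: list of (u, v, p) with p the probability the interaction
        # exists. Sums the probability of every edge subset in which some
        # target is reachable from some source. Exponential in len(edges).
        total = 0.0
        for states in product((0, 1), repeat=len(edges)):
            p = 1.0
            adj = {v: [] for v in range(n_nodes)}
            for (u, v, pe), s in zip(edges, states):
                p *= pe if s else 1.0 - pe
                if s:
                    adj[u].append(v)
                    adj[v].append(u)
            seen, queue = set(sources), deque(sources)
            while queue:              # BFS from the sources
                u = queue.popleft()
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            if seen & set(targets):
                total += p
        return total
    ```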

  6. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    PubMed

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

    A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm can reduce the solving time by 3.5 times and reduce the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear with the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can run 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
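
    The COCG iteration itself is compact; the Python sketch below shows the key difference from ordinary CG for a complex symmetric (non-Hermitian) matrix, namely that the unconjugated bilinear form x·y replaces the Hermitian inner product. This is a single-threaded illustration, not the paper's multi-GPU implementation.

    ```python
    import numpy as np

    def cocg(A, b, tol=1e-8, max_iter=1000):
        # Conjugate Orthogonal Conjugate Gradient for complex symmetric A.
        # Note: for complex vectors, numpy's @ is the unconjugated product.
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rho = r @ r                       # unconjugated inner product
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rho / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            rho_new = r @ r
            p = r + (rho_new / rho) * p
            rho = rho_new
        return x
    ```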

  7. Science advancements key to increasing management value of life stage monitoring networks for endangered Sacramento River winter-run Chinook salmon in California

    USGS Publications Warehouse

    Johnson, Rachel C.; Windell, Sean; Brandes, Patricia L.; Conrad, J. Louise; Ferguson, John; Goertler, Pascale A. L.; Harvey, Brett N.; Heublein, Joseph; Isreal, Joshua A.; Kratville, Daniel W.; Kirsch, Joseph E.; Perry, Russell W.; Pisciotto, Joseph; Poytress, William R.; Reece, Kevin; Swart, Brycen G.

    2017-01-01

    A robust monitoring network that provides quantitative information about the status of imperiled species at key life stages and geographic locations over time is fundamental for sustainable management of fisheries resources. For anadromous species, management actions in one geographic domain can substantially affect abundance of subsequent life stages that span broad geographic regions. Quantitative metrics (e.g., abundance, movement, survival, life history diversity, and condition) at multiple life stages are needed to inform how management actions (e.g., hatcheries, harvest, hydrology, and habitat restoration) influence salmon population dynamics. The existing monitoring network for endangered Sacramento River winter-run Chinook Salmon (SRWRC, Oncorhynchus tshawytscha) in California's Central Valley was compared to conceptual models developed for each life stage and geographic region of the life cycle to identify relevant SRWRC metrics. We concluded that the current monitoring network was insufficient to diagnose when (life stage) and where (geographic domain) chronic or episodic reductions in SRWRC cohorts occur, precluding within- and among-year comparisons. The strongest quantitative data exist in the Upper Sacramento River, where abundance estimates are generated for adult spawners and emigrating juveniles. However, once SRWRC leave the upper river, our knowledge of their identity, abundance, and condition diminishes, despite the juvenile monitoring enterprise. We identified six system-wide recommended actions to strengthen the value of data generated from the existing monitoring network to assess resource management actions: (1) incorporate genetic run identification; (2) develop juvenile abundance estimates; (3) collect data for life history diversity metrics at multiple life stages; (4) expand and enhance real-time fish survival and movement monitoring; (5) collect fish condition data; and (6) provide timely public access to monitoring data in open data formats. To illustrate how updated technologies can enhance the existing monitoring to provide quantitative data on SRWRC, we provide examples of how each recommendation can address specific management issues.

  8. Velocity changes, long runs, and reversals in the Chromatium minus swimming response.

    PubMed Central

    Mitchell, J G; Martinez-Alonso, M; Lalucat, J; Esteve, I; Brown, S

    1991-01-01

    The velocity, run time, path curvature, and reorientation angle of Chromatium minus were measured as a function of light intensity, temperature, viscosity, osmotic pressure, and hydrogen sulfide concentration. C. minus changed both velocity and run time. Velocity decreased with increasing light intensity in sulfide-depleted cultures and increased in sulfide-replete cultures. The addition of sulfide to cultures grown at low light intensity (10 microeinsteins m-2 s-1) caused mean run times to increase from 10.5 to 20.6 s. The addition of sulfide to cultures grown at high light intensity (100 microeinsteins m-2 s-1) caused mean run times to decrease from 15.3 to 7.7 s. These changes were maintained for up to an hour and indicate that at least some members of the family Chromatiaceae simultaneously modulate velocity and turning frequency for extended periods as part of normal taxis. PMID:1991736

  9. Run-of-river power plants in Alpine regions: Whither optimal capacity?

    NASA Astrophysics Data System (ADS)

    Lazzaro, G.; Botter, G.

    2015-07-01

    Although run-of-river hydropower represents a key source of renewable energy, it cannot prevent stresses on river ecosystems and human well-being. This is especially true in Alpine regions, where the outflow of a plant is placed several kilometers downstream of the intake, inducing the depletion of river reaches of considerable length. Here multiobjective optimization is used in the design of the capacity of run-of-river plants to identify optimal trade-offs between two contrasting objectives: the maximization of the profitability and the minimization of the hydrologic disturbance between the intake and the outflow. The latter is evaluated considering different flow metrics: mean discharge, temporal autocorrelation, and streamflow variability. Efficient and Pareto-optimal plant sizes are devised for two representative case studies belonging to the Piave river (Italy). Our results show that the optimal design capacity is strongly affected by the flow regime at the plant intake. In persistent regimes with a reduced flow variability, the optimal trade-off between economic exploitation and hydrologic disturbance is obtained for a narrow range of capacities sensibly smaller than the economic optimum. In erratic regimes featured by an enhanced flow variability, instead, the Pareto front is discontinuous and multiple trade-offs can be identified, which imply either smaller or larger plants compared to the economic optimum. In particular, large capacities reduce the impact of the plant on the streamflow variability at seasonal and interannual time scale. Multiobjective analysis could provide a clue for the development of policy actions based on the evaluation of the environmental footprint of run-of-river plants.
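
    At its core, the trade-off analysis reduces to filtering candidate plant capacities down to the non-dominated set; a minimal Python sketch follows (revenue is to be maximized and hydrologic disturbance minimized; both values would come from simulating each candidate capacity against the flow regime, which is not shown here).

    ```python
    def pareto_front(designs):
        # designs: list of (capacity, revenue, disturbance) tuples. A design
        # is dropped if some other design has revenue >= and disturbance <=
        # with at least one strict inequality. O(n^2), fine for a capacity sweep.
        front = []
        for i, (_, ri, di) in enumerate(designs):
            dominated = any(rj >= ri and dj <= di and (rj > ri or dj < di)
                            for j, (_, rj, dj) in enumerate(designs) if j != i)
            if not dominated:
                front.append(designs[i])
        return front
    ```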

  10. DNA motif alignment by evolving a population of Markov chains.

    PubMed

    Bi, Chengpeng

    2009-01-30

    Deciphering cis-regulatory elements, or de novo motif-finding in genomes, still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima, like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often independently run a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). It is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method of running multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised for comparison with its PMC counterpart. Experimental studies demonstrate that performance could be improved if pooled information were used to run a population of motif samplers. The new PMC algorithm was able to improve the convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.

  11. Connecting an Ocean-Bottom Broadband Seismometer to a Seafloor Cabled Observatory: A Prototype System in Monterey Bay

    NASA Astrophysics Data System (ADS)

    McGill, P.; Neuhauser, D.; Romanowicz, B.

    2008-12-01

    The Monterey Ocean-Bottom Broadband (MOBB) seismic station was installed in April 2003, 40 km offshore from the central coast of California at a seafloor depth of 1000 m. It comprises a three-component broadband seismometer system (Guralp CMG-1T), installed in a hollow PVC caisson and buried under the seafloor; a current meter; and a differential pressure gauge. The station has been operating continuously since installation with no connection to the shore. Three times each year, the station is serviced with the aid of a Remotely Operated Vehicle (ROV) to change the batteries and retrieve the seismic data. In February 2009, the MOBB system will be connected to the Monterey Accelerated Research System (MARS) seafloor cabled observatory. The NSF-funded MARS observatory comprises a 52 km electro-optical cable that extends from a shore facility in Moss Landing out to a seafloor node in Monterey Bay. Once installation is completed in November 2008, the node will provide power and data to as many as eight science experiments through underwater electrical connectors. The MOBB system is located 3 km from the MARS node, and the two will be connected with an extension cable installed by an ROV with the aid of a cable-laying toolsled. The electronics module in the MOBB system is being refurbished to support the connection to the MARS observatory. The low-power autonomous data logger has been replaced with a PC/104 computer stack running embedded Linux. This new computer will run an Object Ring Buffer (ORB), which will collect data from the various MOBB sensors and forward it to another ORB running on a computer at the MARS shore station. There, the data will be archived and then forwarded to a third ORB running at the UC Berkeley Seismological Laboratory. Timing will be synchronized among MOBB's multiple acquisition systems using NTP, GPS clock emulation, and a precise timing signal from the MARS cable. The connection to the MARS observatory will provide real-time access to the MOBB data and eliminate the need for frequent servicing visits. The new system uses off-the-shelf hardware and open-source software, and will serve as a prototype for future instruments connected to seafloor cabled observatories.

  12. Multiple stress response of lowland stream benthic macroinvertebrates depends on habitat type.

    PubMed

    Graeber, Daniel; Jensen, Tinna M; Rasmussen, Jes J; Riis, Tenna; Wiberg-Larsen, Peter; Baattrup-Pedersen, Annette

    2017-12-01

    Worldwide, lowland stream ecosystems are exposed to multiple anthropogenic stress due to the combination of water scarcity, eutrophication, and fine sedimentation. The understanding of the effects of such multiple stress on stream benthic macroinvertebrates has been growing in recent years. However, the interdependence of multiple stress and stream habitat characteristics has received little attention, although single stressor studies indicate that habitat characteristics may be decisive in shaping the macroinvertebrate response. We conducted an experiment in large outdoor flumes to assess the effects of low flow, fine sedimentation, and nutrient enrichment on the structure of the benthic macroinvertebrate community in riffle and run habitats of lowland streams. For most taxa, we found a negative effect of low flow on macroinvertebrate abundance in the riffle habitat, an effect which was mitigated by fine sedimentation for overall community composition and the dominant shredder species (Gammarus pulex) and by nutrient enrichment for the dominant grazer species (Baetis rhodani). In contrast, fine sediment in combination with low flow rapidly affected macroinvertebrate composition in the run habitat, with decreasing abundances of many species. We conclude that the effects of typical multiple stressor scenarios on lowland stream benthic macroinvertebrates are highly dependent on habitat conditions and that high habitat diversity needs to be given priority by stream managers to maximize the resilience of stream macroinvertebrate communities to multiple stress. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Relationship between 1.5-mile run time, injury risk and training outcome in British Army recruits.

    PubMed

    Hall, Lianne J

    2017-12-01

    The 1.5-mile run time, as a surrogate measure of aerobic fitness, is associated with musculoskeletal injury (MSI) risk in military recruits. This study aimed to determine whether 1.5-mile run times can predict injury risk and attrition rates during phase 1 (initial) training, and whether a link exists between phase 1 and phase 2 discharge outcomes in British Army recruits. 1.5-mile times from week 1 of initial training, together with MSI reported during training, were retrieved for 3446 male recruits. Run times were examined against injury occurrence and training outcomes for 3050 recruits using binary logistic regression and χ² analysis. The 1.5-mile run can predict injury risk and phase 1 attrition rates (χ²(1) = 59.3, p < 0.001; χ²(1) = 66.873, p < 0.001). Slower 1.5-mile run times were associated with higher injury occurrence (χ²(1) = 59.3, p < 0.001) and with reduced phase 1 (χ² = 104.609, p < 0.001) and phase 2 (χ² = 84.978, p < 0.001) success. The 1.5-mile run can be used to guide a future standard that will in turn help reduce injury occurrence and improve training success. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
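
    For readers unfamiliar with the method, the following sketch reproduces the statistical approach (binary logistic regression of injury occurrence on run time) on synthetic data; all numbers are invented and do not reproduce the study's coefficients.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
run_time = rng.normal(10.5, 1.0, 500)               # week-1 1.5-mile times (min)
true_logit = -8.0 + 0.7 * run_time                  # assumed true relation
injured = (rng.random(500) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(run_time)
fit = sm.Logit(injured, X).fit(disp=False)
print(fit.params)                                   # intercept, slope
print("odds ratio per extra minute:", float(np.exp(fit.params[1])))
```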

  14. Development of an impulsive noise source to study the acoustic reflection characteristics of hard-walled wind tunnels

    NASA Technical Reports Server (NTRS)

    Salikuddin, M.; Burrin, R. H.; Ahuja, K. K.; Bartel, H. W.

    1986-01-01

    Two impulsive sound sources, one using multiple acoustic drivers and the other using a spark discharge, were developed to study the acoustic reflection characteristics of hard-walled wind tunnels, and the results of laboratory tests are presented. The analysis indicates that, although the pulse generated by the spark source was more intense than that obtained from the acoustic source, the number of averages needed for a particular test may require an unacceptably long tunnel-run time, because capacitor charging limits the spark generation repeat rate. Together with the additional hardware problems of keeping electrodes and electrode holders serviceable under repeated spark discharges, this shows the multidriver acoustic source to be more suitable for this application.

  15. Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget

    NASA Astrophysics Data System (ADS)

    Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong

    To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive method for adjusting the processing quality of multiple image stream tasks whose execution times vary widely. The method completes the worst-case executions of the tasks within a given energy budget, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks that can execute at arbitrary processing quality, and a near-maximum value for tasks restricted to a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves reward values up to 57% larger than the previous method.
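
    The core trade-off can be illustrated with a generic multiple-choice-knapsack heuristic, a sketch under stated assumptions rather than the authors' exact algorithm: each task has discrete quality levels with (worst-case energy, reward) pairs, and quality is raised greedily where the marginal reward per unit of extra energy is highest until the budget is exhausted.

```python
def allocate_quality(tasks, budget):
    """tasks: per-task list of (energy, reward) levels, ascending in energy.
    Returns the chosen level index per task, starting everyone at level 0."""
    levels = [0] * len(tasks)
    spent = sum(t[0][0] for t in tasks)        # base quality must always fit
    if spent > budget:
        raise ValueError("budget cannot cover worst-case base executions")
    while True:
        best, best_ratio = None, 0.0
        for i, t in enumerate(tasks):
            k = levels[i]
            if k + 1 < len(t):
                de = t[k + 1][0] - t[k][0]     # extra energy for next level
                dr = t[k + 1][1] - t[k][1]     # extra reward
                if spent + de <= budget and de > 0 and dr / de > best_ratio:
                    best, best_ratio = i, dr / de
        if best is None:
            return levels
        spent += tasks[best][levels[best] + 1][0] - tasks[best][levels[best]][0]
        levels[best] += 1

# Example: two stream tasks, three quality levels each, 10 energy units.
tasks = [[(2, 1.0), (4, 1.8), (7, 2.2)], [(3, 1.5), (5, 2.6), (8, 3.0)]]
print(allocate_quality(tasks, 10))             # -> [1, 1]
```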

  16. Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.

    PubMed

    Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A

    2018-05-01

    Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries (the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges) to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
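
    A toy discrete-time simulation in the spirit of the study's design is sketched below. Arrival and compatibility rates are invented, only greedy two-way exchanges are matched, and the clinical details (sensitization, nondirected donors, departures) are omitted; it only illustrates how match-run frequency interacts with pool size and waiting time.

```python
import random

def simulate(interval, days=2000, p_arrive=0.5, p_compat=0.05, seed=0):
    """Toy KPD pool: at most one pair arrives per day; a match-run executes
    every `interval` days and greedily books two-way exchanges."""
    rng = random.Random(seed)
    pool = {}                                   # pair id -> arrival day
    compat = set()                              # mutually compatible id pairs
    next_id, transplants, waits = 0, 0, []
    for day in range(days):
        if rng.random() < p_arrive:
            for other in pool:                  # two one-way draws = mutual
                if rng.random() < p_compat and rng.random() < p_compat:
                    compat.add(frozenset((next_id, other)))
            pool[next_id] = day
            next_id += 1
        if day % interval == 0:                 # the match-run
            for a in sorted(pool):
                if a not in pool:
                    continue                    # already matched this run
                for b in sorted(pool):
                    if b != a and frozenset((a, b)) in compat:
                        waits += [day - pool.pop(a), day - pool.pop(b)]
                        transplants += 2
                        break
    mean_wait = sum(waits) / len(waits) if waits else float("nan")
    return transplants, round(mean_wait, 1)

for interval in (1, 7, 30, 90):
    print(interval, simulate(interval))
```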

  17. A Generic Authentication LoA Derivation Model

    NASA Astrophysics Data System (ADS)

    Yao, Li; Zhang, Ning

    One way of achieving more fine-grained access control is to link an authentication level of assurance (LoA), derived from a requester's authentication instance, to the authorisation decision made for the requester. To realise this vision, a LoA derivation model is needed that supports the use and quantification of multiple LoA-affecting attributes and analyses their composite effect on a given authentication instance. This paper reports the design of such a model, a generic LoA derivation model (GEA-LoADM). GEA-LoADM takes into account multiple authentication attributes along with their relationships, abstracts the composite effect of the multiple attributes into a generic value, the authentication LoA, and provides algorithms for the run-time derivation of LoA. The algorithms are tailored to reflect the relationships among the attributes involved in an authentication instance. The model has a number of valuable properties, including flexibility and extensibility; it can be applied to different application contexts and supports easy addition of new attributes and removal of obsolete ones.
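
    As a hedged illustration of run-time LoA derivation (not the specific GEA-LoADM algorithms), a composite LoA can be computed recursively over attribute relationships, for example weakest-link for attributes that must all hold and strongest-link for alternative methods:

```python
def combined_loa(node):
    """node is either an int LoA leaf or ('AND'|'OR', [children]).
    AND = all attributes required (weakest link); OR = alternatives."""
    if isinstance(node, int):
        return node
    op, children = node
    values = [combined_loa(c) for c in children]
    return min(values) if op == "AND" else max(values)

# Example: password (LoA 2) AND (hardware token LoA 3 OR SMS code LoA 2)
instance = ("AND", [2, ("OR", [3, 2])])
print(combined_loa(instance))  # -> 2
```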

  18. SparkJet Efficiency

    NASA Technical Reports Server (NTRS)

    Golbabaei-Asl, Mona; Knight, Doyle; Anderson, Kellie; Wilkinson, Stephen

    2013-01-01

    A novel method for determining the thermal efficiency of the SparkJet is proposed. A SparkJet is attached to the end of a pendulum. The motion of the pendulum subsequent to a single spark discharge is measured using a laser displacement sensor. The measured displacement vs time is compared with the predictions of a theoretical perfect gas model to estimate the fraction of the spark discharge energy which results in heating the gas (i.e., increasing the translational-rotational temperature). The results from multiple runs for different capacitances of c = 3, 5, 10, 20, and 40 micro-F show that the thermal efficiency decreases with higher capacitive discharges.
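
    A back-of-the-envelope sketch of the energy bookkeeping involved is given below; the numbers are invented, and the ratio computed is only a crude mechanical comparison, not the paper's thermal efficiency, which is extracted by fitting a perfect gas model to the displacement history.

```python
import math

def efficiency_bound(c_farad, v_volt, m_pend_kg, arm_m, x_max_m):
    """Compare the pendulum's peak potential energy, m*g*L*(1 - cos(theta)),
    with the stored capacitor energy E = C*V^2/2."""
    g = 9.81
    theta = x_max_m / arm_m                  # small-angle deflection (rad)
    e_mech = m_pend_kg * g * arm_m * (1.0 - math.cos(theta))
    e_cap = 0.5 * c_farad * v_volt**2
    return e_mech / e_cap

print(efficiency_bound(c_farad=10e-6, v_volt=600.0, m_pend_kg=0.05,
                       arm_m=0.30, x_max_m=0.002))
```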

  19. Towards Formal Verification of a Separation Microkernel

    NASA Astrophysics Data System (ADS)

    Butterfield, Andrew; Sanan, David; Hinchey, Mike

    2013-08-01

    The best approach to verifying an IMA separation kernel is to use a (fixed) time-space partitioning kernel with a multiple independent levels of separation (MILS) architecture. We describe an activity that explores the cost and feasibility of doing a formal verification of such a kernel to the Common Criteria (CC) levels mandated by the Separation Kernel Protection Profile (SKPP). We are developing a Reference Specification of such a kernel, and are using higher-order logic (HOL) to construct formal models of this specification and key separation properties. We then plan to do a dry run of part of a formal proof of those properties using the Isabelle/HOL theorem prover.

  20. Daytime Water Detection by Fusing Multiple Cues for Autonomous Off-Road Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, A. L.; Matthies, L. H.; Huertas, A.

    2004-01-01

    Detecting water hazards is a significant challenge for unmanned ground vehicle autonomous off-road navigation. This paper focuses on detecting the presence of water during the daytime using color cameras. A multi-cue approach is taken. Evidence of the presence of water is generated from color, texture, and the detection of reflections in stereo range data. A rule base for fusing water cues was developed by evaluating detection results from an extensive archive of data collection imagery containing water. This software has been implemented in a run-time passive perception subsystem and tested thus far under Linux on a Pentium-based processor.
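
    A toy illustration of the multi-cue fusion idea follows: per-pixel evidence scores from color, texture, and stereo-reflection cues are combined by a simple rule base. Thresholds and weights are invented for illustration; the actual rule base is not given in the abstract.

```python
import numpy as np

def detect_water(color_score, texture_score, reflection_score):
    """Each input is an HxW float array in [0, 1]; returns a boolean mask."""
    strong_reflection = reflection_score > 0.8      # reflections dominate
    combined = (0.3 * color_score
                + 0.2 * (1.0 - texture_score)       # low texture favors water
                + 0.5 * reflection_score)
    return strong_reflection | (combined > 0.6)

rng = np.random.default_rng(7)
mask = detect_water(rng.random((4, 4)), rng.random((4, 4)), rng.random((4, 4)))
print(mask)
```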

  1. Ten quick tips for machine learning in computational biology.

    PubMed

    Chicco, Davide

    2017-01-01

    Machine learning has become a pivotal tool for many projects in computational biology, bioinformatics, and health informatics. Nevertheless, beginners and biomedical researchers often lack the experience to run a data mining project effectively, and therefore can follow incorrect practices that lead to common mistakes or over-optimistic results. With this review, we present ten quick tips for taking advantage of machine learning in any computational biology context, by avoiding some common errors that we have observed hundreds of times in multiple bioinformatics projects. We believe our ten suggestions can strongly help any machine learning practitioner carry out a successful project in computational biology and related sciences.

  2. Walking, running, and resting under time, distance, and average speed constraints: optimality of walk–run–rest mixtures

    PubMed Central

    Long, Leroy L.; Srinivasan, Manoj

    2013-01-01

    On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
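
    The energy-minimization argument can be reproduced with a small optimization sketch: given a non-convex metabolic power curve (the minimum of a walking branch and a running branch, with invented constants), the cheapest way to cover distance D in exactly time T is found over all two-speed mixtures. Steady locomotion appears as the boundary case where one speed gets zero time.

```python
import itertools

def power(v):
    """Metabolic power (W/kg) vs speed (m/s): min of a walking branch and a
    running branch, which makes the curve non-convex near the crossover.
    power(0) = 2 is the resting rate. All constants are invented."""
    return min(2.0 + v * v, 5.0 + 0.5 * v * v)

def best_mixture(D=1500.0, T=600.0, vmax=6.0, n=121):
    """Cheapest way to cover D metres in exactly T seconds with at most two
    speeds, found by grid search over speed pairs."""
    speeds = [vmax * i / (n - 1) for i in range(n)]
    best = None
    for v1, v2 in itertools.combinations(speeds, 2):
        if not (v1 * T <= D <= v2 * T):
            continue
        t2 = (D - v1 * T) / (v2 - v1)      # time spent at the faster speed
        t1 = T - t2
        cost = t1 * power(v1) + t2 * power(v2)
        if best is None or cost < best[0]:
            best = (cost, v1, v2, t1, t2)
    return best

cost, v1, v2, t1, t2 = best_mixture()
print(f"{t1:.0f} s at {v1:.2f} m/s + {t2:.0f} s at {v2:.2f} m/s, {cost:.0f} J/kg")
```

    With these invented curves the optimal solution at an intermediate average speed is a genuine walk-run mixture, mirroring the paper's conclusion that steady locomotion is not always energy optimal.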

  3. BnmrOffice: A Free Software for β-nmr Data Analysis

    NASA Astrophysics Data System (ADS)

    Saadaoui, Hassan

    A data-analysis framework with a graphical user interface (GUI) is developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice, is written in C++ and employs the Qt libraries and tools for designing the GUI and CERN's Minuit optimization routines for minimization. The program runs on multiple platforms and is available for free under the terms of the GNU GPL. The GUI is structured in tabs to search, plot, and analyze data, along with other functionalities. The user can tweak the minimization options and fit multiple data files (or runs) using single or global fitting routines with pre-defined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.

  4. Lower-volume muscle-damaging exercise protects against high-volume muscle-damaging exercise and the detrimental effects on endurance performance.

    PubMed

    Burt, Dean; Lamb, Kevin; Nicholas, Ceri; Twist, Craig

    2015-07-01

    This study examined whether lower-volume exercise-induced muscle damage (EIMD) performed 2 weeks before high-volume muscle-damaging exercise protects against its detrimental effect on running performance. Sixteen male participants were randomly assigned to a lower-volume (five sets of ten squats, n = 8) or high-volume (ten sets of ten squats, n = 8) EIMD group and completed baseline measurements for muscle soreness, knee extensor torque, creatine kinase (CK), a 5-min fixed-intensity running bout, and a 3-km running time-trial. Measurements were repeated 24 and 48 h after EIMD, and the running time-trial after 48 h. Two weeks later, both groups repeated the baseline measurements, ten sets of ten squats, and the same follow-up testing (Bout 2). Data analysis revealed increases in muscle soreness and CK and decreases in knee extensor torque 24-48 h after the initial bouts of EIMD. Increases in oxygen uptake [Formula: see text], minute ventilation [Formula: see text], and rating of perceived exertion were observed during fixed-intensity running 24-48 h after EIMD Bout 1. Likewise, time increased and speed and [Formula: see text] decreased during the 3-km running time-trial 48 h after EIMD. Symptoms of EIMD, and the responses during fixed-intensity running and the running time-trial, were attenuated in the days after the repeated bout of high-volume EIMD performed 2 weeks after the initial bout. This study demonstrates that the protective effect of lower-volume EIMD on subsequent high-volume EIMD transfers to endurance running. Furthermore, time-trial performance was found to be preserved after a repeated bout of EIMD.

  5. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains, from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform, and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space, and that traversal of the solution space is not just random but directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run, and with much less memory, on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
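
    For orientation, here is a baseline greedy for the 0-1 multiple knapsack problem in the spirit of the G(x) comparison algorithm (sort by value density, then first-fit items into knapsacks); the market/agent mechanism M(x) itself is not reproduced here. Positive weights are assumed.

```python
def greedy_mkp(items, capacities):
    """items: list of (value, weight); capacities: list of knapsack sizes.
    Returns total value and an assignment: item index -> knapsack index."""
    remaining = list(capacities)
    assignment, total = {}, 0
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in order:                          # best value density first
        value, weight = items[i]
        for k, cap in enumerate(remaining):  # first knapsack it fits in
            if weight <= cap:
                remaining[k] -= weight
                assignment[i] = k
                total += value
                break
    return total, assignment

items = [(60, 10), (100, 20), (120, 30), (40, 5)]
print(greedy_mkp(items, capacities=[30, 25]))  # -> (200, {3: 0, 0: 0, 1: 1})
```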

  6. Multiple Core Galaxies

    NASA Technical Reports Server (NTRS)

    Miller, R.H.; Morrison, David (Technical Monitor)

    1994-01-01

    Nuclei of galaxies often show complicated density structures and perplexing kinematic signatures. In the past we have reported numerical experiments indicating a natural tendency for galaxies to show nuclei offset with respect to nearby isophotes, and for the nucleus to have a radial velocity different from the galaxy's systemic velocity. Other experiments show normal-mode oscillations in galaxies with large amplitudes. These oscillations do not damp appreciably over a Hubble time. The common thread running through all these is that galaxies often show evidence of ringing, bouncing, or sloshing around in unexpected ways, even though they have not been disturbed by any external event. Recent observational evidence shows yet another phenomenon indicating the dynamical complexity of central regions of galaxies: multiple cores (M31, Markarian 315 and 463, for example). These systems can hardly be static. We noted long-lived multiple core systems in galaxies in numerical experiments some years ago, and we have more recently followed up with a series of experiments on multiple core galaxies, starting with two cores. The relevant parameters are the energy in the orbiting clumps, their relative masses, the (local) strength of the potential well representing the parent galaxy, and the number of cores. We have studied the dependence of the merger rates and the nature of the final merger product on these parameters. Individual cores survive much longer in stronger background potentials. Cores can survive for a substantial fraction of a Hubble time if they travel on reasonable orbits.

  7. Effect of cycle run time of backwash and relaxation on membrane fouling removal in submerged membrane bioreactor treating sewage at higher flux.

    PubMed

    Tabraiz, Shamas; Haydar, Sajjad; Sallis, Paul; Nasreen, Sadia; Mahmood, Qaisar; Awais, Muhammad; Acharya, Kishor

    2017-08-01

    Intermittent backwashing and relaxation are mandatory for effective operation of a membrane bioreactor (MBR). The objective of the current study was to evaluate the effects of run-relaxation and run-backwash cycle times on fouling rates, and to compare the effects of backwashing and relaxation on membrane fouling behavior in a high-rate submerged MBR. The study was carried out on a laboratory-scale MBR treating sewage at high flux (30 L/m²·h). The MBR was operated under three relaxation scenarios with a constant ratio of run time to relaxation time, and likewise under three backwashing scenarios with a constant ratio of run time to backwashing time. The results revealed that providing relaxation or backwashing at short intervals prolonged MBR operation by reducing fouling rates. Cake and pore fouling rates under the backwashing scenarios were far lower than under the relaxation scenarios, showing backwashing to be the better option. The operation time of the backwashing scenario with the lowest cycle time was 64.6% and 21.1% longer than that of the continuous scenario and the relaxation scenario with the lowest cycle time, respectively. Increasing cycle time increased removal efficiencies only insignificantly under both relaxation and backwashing scenarios.

  8. Data imputation analysis for Cosmic Rays time series

    NASA Astrophysics Data System (ADS)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data loss follows from mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple-dataset imputation in order to reconstruct the observational dataset. The study used the monthly time series of the GCR stations Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90% missing data relative to the observed ROME series, with 50 replicates; the CLMX station was then used as a proxy for allocating these scenarios. Three methods for monthly dataset imputation were selected: Amelia II, which runs the bootstrap expectation-maximization algorithm; MICE, which runs an algorithm based on Multivariate Imputation by Chained Equations; and MTSDI, an expectation-maximization-based method for imputing missing values in multivariate normal time series. The synthetic time series were compared with the observed ROME series using several skill measures, such as RMSE, NRMSE, the agreement index, R, R², the F-test, and the t-test. For CLMX and ROME, the R² and R statistics were 0.98 and 0.96, respectively. Increasing the number of gaps was observed to degrade the quality of the imputed series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest that imputation of monthly averages is feasible up to a limit of about 60% missing data, but no more. It is noteworthy that the CLMX, ROME, and KIEL stations have no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
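
    The evaluation protocol can be sketched in a few lines: knock out a fraction of a complete monthly series, impute the gaps, and score the reconstruction. Linear interpolation stands in for the EM-based imputers used in the study, and the synthetic series (an 11-year-like cycle plus noise) is an invented stand-in for GCR counts.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
t = np.arange(540)                                   # 45 years of months
series = 4000 + 300 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 40, t.size)

def impute_and_score(series, frac_missing):
    s = pd.Series(series.copy())
    holes = rng.choice(s.index, size=int(frac_missing * s.size), replace=False)
    s[holes] = np.nan
    filled = s.interpolate(limit_direction="both")   # stand-in imputer
    err = filled[holes] - series[holes]
    rmse = float(np.sqrt((err**2).mean()))
    r = float(np.corrcoef(filled[holes], series[holes])[0, 1])
    return round(rmse, 1), round(r, 3)

for frac in (0.1, 0.3, 0.6, 0.9):                    # skill degrades with gaps
    print(frac, impute_and_score(series, frac))
```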

  9. Evolution and convergence of the patterns of international scientific collaboration.

    PubMed

    Coccia, Mario; Wang, Lili

    2016-02-23

    International research collaboration plays an important role in the social construction and evolution of science. Studies of science increasingly analyze international collaboration across multiple organizations for its impetus in improving research quality, advancing efficiency of the scientific production, and fostering breakthroughs in a shorter time. However, long-run patterns of international research collaboration across scientific fields and their structural changes over time are hardly known. Here we show the convergence of international scientific collaboration across research fields over time. Our study uses a dataset by the National Science Foundation and computes the fraction of papers that have international institutional coauthorships for various fields of science. We compare our results with pioneering studies carried out in the 1970s and 1990s by applying a standardization method that transforms all fractions of internationally coauthored papers into a comparable framework. We find, over 1973-2012, that the evolution of collaboration patterns across scientific disciplines seems to generate a convergence between applied and basic sciences. We also show that the general architecture of international scientific collaboration, based on the ranking of fractions of international coauthorships for different scientific fields per year, has tended to be unchanged over time, at least until now. Overall, this study shows, to our knowledge for the first time, the evolution of the patterns of international scientific collaboration starting from initial results described by literature in the 1970s and 1990s. We find a convergence of these long-run collaboration patterns between the applied and basic sciences. This convergence might be one of contributing factors that supports the evolution of modern scientific fields.

  10. Correlated Observations, the Law of Small Numbers and Bank Runs

    PubMed Central

    2016-01-01

    Empirical descriptions and studies suggest that depositors generally observe a sample of previous decisions before deciding whether to keep their funds deposited or to withdraw them. These observed decisions may exhibit different degrees of correlation across depositors. In our model, depositors decide sequentially and are assumed to follow the law of small numbers, in the sense that they believe a bank run is underway if the number of observed withdrawals in their sample is large. Theoretically, with highly correlated samples and infinitely many depositors, runs occur with certainty, while with random samples this need not be the case, as for many parameter settings the likelihood of bank runs is zero. We investigate the intermediate cases and find that i) decreasing the correlation and ii) increasing the sample size reduce the likelihood of bank runs, ceteris paribus. Interestingly, the multiplicity of equilibria, a feature of the canonical Diamond-Dybvig model that we also use, disappears almost completely in our setup. Our results have relevant policy implications. PMID:27035435
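
    The sequential-decision mechanism can be illustrated with a short simulation, a sketch under stated assumptions rather than the paper's model: depositor i observes a sample of earlier decisions and withdraws if the observed withdrawals reach a threshold, and `overlap` interpolates between highly correlated samples (everyone sees the most recent decisions) and random samples of the whole history. All parameter values are invented.

```python
import random

def bank_run(n=10000, sample=10, threshold=6, p_impatient=0.15,
             overlap=1.0, seed=0):
    """Return the fraction of depositors who withdraw."""
    rng = random.Random(seed)
    decisions = []                                   # True = withdraw
    for _ in range(n):
        if rng.random() < p_impatient:               # impatient types always
            decisions.append(True)                   # withdraw (liquidity)
            continue
        if rng.random() < overlap or len(decisions) <= sample:
            obs = decisions[-sample:]                # correlated sample
        else:
            obs = rng.sample(decisions, sample)      # random sample
        decisions.append(sum(obs) >= threshold)      # law of small numbers
    return sum(decisions) / n

for overlap in (1.0, 0.5, 0.0):                      # correlation -> cascades
    print(overlap, round(bank_run(overlap=overlap), 3))
```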

  11. Export product diversification and the environmental Kuznets curve: evidence from Turkey.

    PubMed

    Gozgor, Giray; Can, Muhlis

    2016-11-01

    Countries try to stabilize the demand for energy on the one hand and sustain economic growth on the other, but worsening global warming and climate change have put pressure on them. This paper estimates the environmental Kuznets curve for Turkey over the period 1971-2010, both in the short and the long run. For this purpose, a unit root test with structural breaks and a cointegration analysis with multiple endogenous structural breaks are used. The effects of energy consumption and export product diversification on CO2 emissions are also controlled for in the dynamic empirical models. The environmental Kuznets curve hypothesis is observed to be valid in Turkey in both the short run and the long run. A positive effect of energy consumption on CO2 emissions is also obtained in the long run. In addition, greater product diversification of exports is found to yield higher CO2 emissions in the long run. Inferences and policy implications are also discussed.

  12. Correlated Observations, the Law of Small Numbers and Bank Runs.

    PubMed

    Horváth, Gergely; Kiss, Hubert János

    2016-01-01

    Empirical descriptions and studies suggest that depositors generally observe a sample of previous decisions before deciding whether to keep their funds deposited or to withdraw them. These observed decisions may exhibit different degrees of correlation across depositors. In our model, depositors decide sequentially and are assumed to follow the law of small numbers, in the sense that they believe a bank run is underway if the number of observed withdrawals in their sample is large. Theoretically, with highly correlated samples and infinitely many depositors, runs occur with certainty, while with random samples this need not be the case, as for many parameter settings the likelihood of bank runs is zero. We investigate the intermediate cases and find that i) decreasing the correlation and ii) increasing the sample size reduce the likelihood of bank runs, ceteris paribus. Interestingly, the multiplicity of equilibria, a feature of the canonical Diamond-Dybvig model that we also use, disappears almost completely in our setup. Our results have relevant policy implications.

  13. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques that improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  14. Informedia at TRECVID2014: MED and MER, Semantic Indexing, Surveillance Event Detection

    DTIC Science & Technology

    2014-11-10

    multiple ranked lists for a given system query. Our system incorporates various retrieval methods such as Vector Space Model, tf-idf, BM25, language...separable space before applying the linear classifier. As the EFM is an approximation, we run the risk of a slight drop in performance. Figure 4 shows...validation set are fused. • CMU_Run3: After removing junk shots (by the junk/black-frame detectors), MultiModal Pseudo Relevance Feedback (MMPRF) [12

  15. Development of a one-run real-time PCR detection system for pathogens associated with bovine respiratory disease complex.

    PubMed

    Kishimoto, Mai; Tsuchiaka, Shinobu; Rahpaya, Sayed Samim; Hasebe, Ayako; Otsu, Keiko; Sugimura, Satoshi; Kobayashi, Suguru; Komatsu, Natsumi; Nagai, Makoto; Omatsu, Tsutomu; Naoi, Yuki; Sano, Kaori; Okazaki-Terashima, Sachiko; Oba, Mami; Katayama, Yukie; Sato, Reiichiro; Asai, Tetsuo; Mizutani, Tetsuya

    2017-03-18

    Bovine respiratory disease complex (BRDC) is frequently found in cattle worldwide. The etiology of BRDC is complicated by infections with multiple pathogens, making identification of the causal pathogen difficult. Here, we developed a detection system applying TaqMan real-time PCR (Dembo respiratory-PCR) to screen a broad range of microbes associated with BRDC in a single run. We selected 16 bovine respiratory pathogens (bovine viral diarrhea virus, bovine coronavirus, bovine parainfluenza virus 3, bovine respiratory syncytial virus, influenza D virus, bovine rhinitis A virus, bovine rhinitis B virus, bovine herpesvirus 1, bovine adenovirus 3, bovine adenovirus 7, Mannheimia haemolytica, Pasteurella multocida, Histophilus somni, Trueperella pyogenes, Mycoplasma bovis and Ureaplasma diversum) as detection targets and designed novel specific primer-probe sets for nine of them. Assay performance was assessed using standard curves from synthesized DNA. In addition, the sensitivity of the assay was evaluated by spiking pathogen nucleic acids or synthesized DNA into solutions extracted from nasal swabs that had tested negative by Dembo respiratory-PCR. All primer-probe sets showed high sensitivity. In this study, a total of 40 nasal swab samples from cattle on six farms were tested by Dembo respiratory-PCR. Dembo respiratory-PCR can be applied as a screening system with a wide range of detection targets.

  16. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS's current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run 2. After concluding a rigorous requirements phase, in which many design components were examined in detail, ATLAS has begun the migration to a new data-flow-driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread-unsafe legacy Algorithms; cloned Algorithms that execute concurrently in their own threads with different Event contexts; and fully re-entrant, thread-safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event- and time-dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, and concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
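
    The scheduling idea can be sketched in a language-neutral way (Python stands in here for the C++ framework; names and structure are illustrative only): events are processed concurrently, each with its own event context, while a thread-unsafe legacy algorithm is serialized behind a lock and a re-entrant algorithm runs freely.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

legacy_lock = threading.Lock()

def legacy_algorithm(ctx):
    with legacy_lock:                    # singleton, thread-unsafe: serialize
        ctx["legacy"] = sum(ctx["hits"])

def reentrant_algorithm(ctx):
    ctx["reentrant"] = max(ctx["hits"])  # no shared mutable state: runs freely

def process_event(event_number):
    ctx = {"event": event_number, "hits": [event_number, 2, 3]}
    legacy_algorithm(ctx)
    reentrant_algorithm(ctx)
    return ctx

with ThreadPoolExecutor(max_workers=4) as pool:
    for ctx in pool.map(process_event, range(8)):
        print(ctx)
```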

  17. Spaceborne Processor Array

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  18. Attenuation of foot pressure during running on four different surfaces: asphalt, concrete, rubber, and natural grass.

    PubMed

    Tessutti, Vitor; Ribeiro, Ana Paula; Trombini-Souza, Francis; Sacco, Isabel C N

    2012-01-01

    The practice of running has consistently increased worldwide, and with it, related lower limb injuries. The type of running surface has been associated with running injury etiology, in addition to other factors such as the relationship between the amount and intensity of training. There is still controversy in the literature regarding the biomechanical effects of different types of running surfaces on foot-floor interaction. The aim of this study was to investigate the influence of running on asphalt, concrete, natural grass, and rubber on in-shoe pressure patterns in adult recreational runners. Forty-seven adult recreational runners ran twice for 40 m on each of the four surfaces at 12 km·h⁻¹ (± 5%). Peak pressure, pressure-time integral, and contact time were recorded by Pedar X insoles. Asphalt and concrete were similar for all plantar variables and pressure zones. Running on grass produced peak pressures 9.3% to 16.6% lower (P < 0.001) than the other surfaces in the rearfoot and 4.7% to 12.3% (P < 0.05) lower in the forefoot. The contact time on rubber was greater than on concrete for the rearfoot and midfoot. The behaviour of rubber was similar to that of the rigid surfaces, concrete and asphalt, possibly because of its age (five years of usage). Running on natural grass attenuates in-shoe plantar pressures in recreational runners. If a runner controls the amount and intensity of practice, running on grass may reduce the total stress on the musculoskeletal system compared with running on more rigid surfaces, such as asphalt and concrete.

  19. PS3 CELL Development for Scientific Computation and Research

    NASA Astrophysics Data System (ADS)

    Christiansen, M.; Sevre, E.; Wang, S. M.; Yuen, D. A.; Liu, S.; Lyness, M. D.; Broten, M.

    2007-12-01

    The Cell processor is one of the most powerful processors on the market, and researchers in the earth sciences may find its parallel architecture to be very useful. A Cell processor, with 7 cores, can easily be obtained for experimentation by purchasing a PlayStation 3 (PS3) and installing Linux and the IBM SDK. Each core of the PS3 is capable of 25 GFLOPS, giving a potential limit of 150 GFLOPS when using all 6 SPUs (synergistic processing units) with vectorized algorithms. We have used the Cell's computational power to create a program which takes simulated tsunami datasets, parses them, and returns a colorized height field image using ray casting techniques. As expected, the time required to create an image is inversely proportional to the number of SPUs used. We believe that this trend will continue when multiple PS3s are chained using OpenMP functionality, and we are in the process of researching this. By using the Cell to visualize tsunami data, we have found that its greatest feature is its power. This fits well with the needs of the scientific community, where the limiting factor is time. Any algorithm, such as the heat equation, that can be subdivided into multiple parts can take advantage of the PS3 Cell's ability to split the computations across the 6 SPUs, reducing required run time by one sixth. Further vectorization of the code allows 4 simultaneous floating-point operations per SPU by using its SIMD (single instruction, multiple data) capabilities, increasing efficiency 24-fold.

  20. Nocturnal to Diurnal Switches with Spontaneous Suppression of Wheel-Running Behavior in a Subterranean Rodent

    PubMed Central

    Tachinardi, Patricia; Tøien, Øivind; Valentinuzzi, Veronica S.; Buck, C. Loren; Oda, Gisele A.

    2015-01-01

    Several rodent species that are diurnal in the field become nocturnal in the lab. It has been suggested that the use of running wheels in the lab might contribute to this timing switch. This proposition is based on studies that indicate feedback of vigorous wheel running on the period and phase of the circadian clocks that time daily activity rhythms. Tuco-tucos (Ctenomys aff. knighti) are subterranean rodents that are diurnal in the field but are robustly nocturnal in the laboratory, with or without access to running wheels. We assessed their energy metabolism by continuously and simultaneously monitoring rates of oxygen consumption, body temperature, and general motor and wheel-running activity for several days in the presence and absence of wheels. Surprisingly, some individuals spontaneously suppressed running-wheel activity and switched to diurnality in the respirometry chamber, whereas the remaining animals continued to be nocturnal even after wheel removal. This is the first report of timing switches that occur with spontaneous wheel-running suppression and which are not replicated by removal of the wheel. PMID:26460828

  1. MIMS for TRIM

    EPA Pesticide Factsheets

    MIMS supports complex computational studies that use multiple interrelated models/programs, such as the modules within TRIM. MIMS is used by TRIM to run various models in sequence, while sharing input and output files.

  2. Can anti-gravity running improve performance to the same degree as over-ground running?

    PubMed

    Brennan, Christopher T; Jenkins, David G; Osborne, Mark A; Oyewale, Michael; Kelly, Vincent G

    2018-03-11

    This study examined changes in running performance, maximal blood lactate concentrations, and running kinematics between anti-gravity (AG) running at 85% body mass (BM) and normal over-ground (OG) running over an 8-week training period. Fifteen elite male developmental cricketers were assigned to either the AG or over-ground (CON) running group. The AG group (n = 7) ran twice a week on an AG treadmill and once per week over-ground. The CON group (n = 8) completed all sessions OG on grass. Both AG and OG training resulted in similar improvements in time-trial and shuttle-run performance. Maximal running performance showed moderate differences between the groups; however, the AG condition resulted in less improvement. Large differences existed in maximal blood lactate concentrations, with OG running producing greater increases in blood lactate measured during maximal running. Moderate increases in stride length paired with moderate decreases in stride rate also resulted from AG training. AG training should be used cautiously as a supplement to regular OG training for performance, as extended use over long periods could lead to altered stride mechanics and reduced blood lactate.

  3. Scalable and responsive event processing in the cloud

    PubMed Central

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-01

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
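
    The queueing-theoretic sizing idea can be sketched directly: model each query-processing engine as an M/G/1 queue, predict its mean response time with the Pollaczek-Khinchine formula, and add cloud nodes until the prediction meets the target. The arrival and service parameters below are illustrative assumptions, and the even split of the event stream is a simplification.

```python
def mg1_response_time(lam, mean_s, var_s):
    """Mean response time of an M/G/1 queue:
    W = E[S] + lam * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S]."""
    rho = lam * mean_s
    if rho >= 1.0:
        return float("inf")                 # unstable: queue grows unboundedly
    second_moment = var_s + mean_s**2
    return mean_s + lam * second_moment / (2.0 * (1.0 - rho))

def nodes_needed(total_rate, mean_s, var_s, target, max_nodes=64):
    """Smallest node count whose predicted response time meets the target,
    assuming the event stream is split evenly across nodes."""
    for n in range(1, max_nodes + 1):
        if mg1_response_time(total_rate / n, mean_s, var_s) <= target:
            return n
    raise RuntimeError("target unreachable within max_nodes")

# 800 events/s, 5 ms mean service, high variance, 20 ms response-time target:
print(nodes_needed(total_rate=800.0, mean_s=0.005, var_s=0.0001, target=0.020))
```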

  4. ICESat-2 laser Nd:YVO4 amplifier

    NASA Astrophysics Data System (ADS)

    Sawruk, Nicholas W.; Burns, Patrick M.; Edwards, Ryan E.; Litvinovitch, Viatcheslav; Martin, Nigel; Witt, Greg; Fakhoury, Elias; Iskander, John; Pronko, Mark S.; Troupaki, Elisavet; Bay, Michael M.; He, Charles C.; Wang, Liqin L.; Cavanaugh, John F.; Farrokh, Babak; Salem, Jonathan A.; Baker, Eric

    2018-02-01

    We report on the cause of, and corrective actions for, three amplifier crystal fractures in the space-qualified laser systems used in NASA Goddard Space Flight Center's (GSFC) Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2). The ICESat-2 lasers each contain three end-pumped Nd:YVO4 amplifier stages. The crystals are clamped between two gold-plated copper heat spreaders with an indium foil thermal interface material, and the crystal fractures occurred after multiple years of storage and over a year of operational run time. The primary contributors are high compressive loading of the Nd:YVO4 crystals at the beginning of life, a time-dependent crystal stress caused by an intermetallic reaction of the gold plating and indium, and slow crack growth resulting in a reduction in crystal strength over time. An updated crystal mounting scheme was designed, analyzed, fabricated, and tested. The fractured-slab failure analysis, finite-element modeling, and corrective actions are presented.

  5. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Trujillo, Anna; Pritchett, Amy R.

    2000-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plug-in' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).
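
    The core pattern described above (a registry that loads components, initializes them, and brokers communication between them each frame) can be sketched conceptually; Python stands in for the simulator's compiled plug-ins, and all names are illustrative only.

```python
class Core:
    """Registers components and routes publish/subscribe messages."""
    def __init__(self):
        self.components, self.subscribers = [], {}

    def register(self, component):
        component.init(self)                  # hand the component a bus handle
        self.components.append(component)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        for cb in self.subscribers.get(topic, []):
            cb(data)

    def step(self, dt):
        for c in self.components:             # advance every plug-in one frame
            c.update(dt)

class Aircraft:
    def init(self, core):
        self.core, self.altitude = core, 0.0
    def update(self, dt):
        self.altitude += 5.0 * dt
        self.core.publish("state/altitude", self.altitude)

class Display:
    def init(self, core):
        core.subscribe("state/altitude", lambda alt: print(f"ALT {alt:7.1f}"))
    def update(self, dt):
        pass

core = Core()
core.register(Aircraft())
core.register(Display())
for _ in range(3):
    core.step(dt=0.1)
```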

  6. Rapidly Re-Configurable Flight Simulator Tools for Crew Vehicle Integration Research and Design

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy R.

    2002-01-01

    While simulation is a valuable research and design tool, the time and difficulty required to create new simulations (or re-use existing simulations) often limits their application. This report describes the design of the software architecture for the Reconfigurable Flight Simulator (RFS), which provides a robust simulation framework that allows the simulator to fulfill multiple research and development goals. The core of the architecture provides the interface standards for simulation components, registers and initializes components, and handles the communication between simulation components. The simulation components are each a pre-compiled library 'plugin' module. This modularity allows independent development and sharing of individual simulation components. Additional interfaces can be provided through the use of Object Data/Method Extensions (OD/ME). RFS provides a programmable run-time environment for real-time access and manipulation, and has networking capabilities using the High Level Architecture (HLA).

  7. Wheel running decreases palatable diet preference in Sprague-Dawley rats.

    PubMed

    Moody, Laura; Liang, Joy; Choi, Pique P; Moran, Timothy H; Liang, Nu-Chu

    2015-10-15

    Physical activity has beneficial effects not only in improving some disease conditions but also in preventing the development of multiple disorders. Experiments in this study examined the effects of wheel running on intakes of chow and palatable diets, e.g. high-fat (HF) or high-sucrose (HS) diet, in male and female Sprague-Dawley rats. Experiment 1 demonstrated that acute wheel running results in robust HF or HS diet avoidance in male rats. Although female rats with running wheel access initially showed complete avoidance of the two palatable diets, the avoidance of the HS diet was transient. Experiment 2 demonstrated that male rats developed decreased HF diet preferences regardless of the order of diet and wheel running access presentation. Running-associated changes in HF diet preference in females, on the other hand, depended on the testing schedule. In female rats, simultaneous presentation of the HF diet and running access resulted in transient complete HF diet avoidance, whereas running experience prior to HF diet access did not affect the high preference for the HF diet. Ovariectomy in females resulted in HF diet preference patterns that were similar to those in male rats during simultaneous exposure to the HF diet and wheel running access, but similar to intact females when running occurred before HF exposure. Overall, the results demonstrated wheel-running-associated changes in palatable diet preferences that were in part sex dependent. Furthermore, ovarian hormones play a role in some of the sex differences. These data reveal complexity in the mechanisms underlying exercise-associated changes in palatable diet preference. Published by Elsevier Inc.

  8. Wheel running decreases palatable diet preference in Sprague-Dawley rats

    PubMed Central

    Moody, Laura; Liang, Joy; Choi, Pique P.; Moran, Timothy H.; Liang, Nu-Chu

    2015-01-01

    Physical activity has beneficial effects not only in improving some disease conditions but also in preventing the development of multiple disorders. Experiments in this study examined the effects of wheel running on intakes of chow and palatable diets, e.g. high-fat (HF) or high-sucrose (HS) diet, in male and female Sprague-Dawley rats. Experiment 1 demonstrated that acute wheel running results in robust HF or HS diet avoidance in male rats. Although female rats with running wheel access initially showed complete avoidance of the two palatable diets, the avoidance of the HS diet was transient. Experiment 2 demonstrated that male rats developed decreased HF diet preferences regardless of the order of diet and wheel running access presentation. Running-associated changes in HF diet preference in females, on the other hand, depended on the testing schedule. In female rats, simultaneous presentation of the HF diet and running access resulted in transient complete HF diet avoidance, whereas running experience prior to HF diet access did not affect the high preference for the HF diet. Ovariectomy in females resulted in HF diet preference patterns that were similar to those in male rats during simultaneous exposure to the HF diet and wheel running access, but similar to intact females when running occurred before HF exposure. Overall, the results demonstrated wheel-running-associated changes in palatable diet preferences that were in part sex dependent. Furthermore, ovarian hormones play a role in some of the sex differences. These data reveal complexity in the mechanisms underlying exercise-associated changes in palatable diet preference. PMID:25791204

  9. 40 CFR Table 1a to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...

  10. 40 CFR Table 1a to Subpart Ce of... - Emissions Limits for Small, Medium, and Large HMIWI at Designated Facilities as Defined in § 60...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...

  11. Feasibility and Reliability of Two Different Walking Tests in People with Severe Intellectual and Sensory Disabilities

    ERIC Educational Resources Information Center

    Waninge, A.; Evenhuis, I. J.; van Wijck, R.; van der Schans, C. P.

    2011-01-01

    Background: The purpose of this study is to describe feasibility and test-retest reliability of the six-minute walking distance test (6MWD) and an adapted shuttle run test (aSRT) in persons with severe intellectual and sensory (multiple) disabilities. Materials and Methods: Forty-seven persons with severe multiple disabilities, with Gross Motor…

  12. Correlates of Adherence to a Telephone-Based Multiple Health Behavior Change Cancer Preventive Intervention for Teens: The Healthy for Life Program (HELP)

    ERIC Educational Resources Information Center

    Mays, Darren; Peshkin, Beth N.; Sharff, McKane E.; Walker, Leslie R.; Abraham, Anisha A.; Hawkins, Kirsten B.; Tercyak, Kenneth P.

    2012-01-01

    This study examined factors associated with teens' adherence to a multiple health behavior cancer preventive intervention. Analyses identified predictors of trial enrollment, run-in completion, and adherence (intervention initiation, number of sessions completed). Of 104 teens screened, 73% (n = 76) were trial eligible. White teens were more…

  13. Regulation of step frequency in transtibial amputee endurance athletes using a running-specific prosthesis.

    PubMed

    Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han

    2017-01-25

    Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with an unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Among others, step time, ground contact time, flight time, leg stiffness and joint kinetics were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. In accordance, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
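
    A published spring-mass estimate, the sine-wave method of Morin et al. (2005), recovers vertical stiffness from exactly the quantities reported above: body mass, contact time, and flight time. The sketch below applies that method (an assumption here, not this study's protocol) with invented numbers for an intact and a hypothetical prosthetic leg.

```python
import math

def vertical_stiffness(mass, t_contact, t_flight):
    """Model vertical GRF as F(t) = Fmax * sin(pi * t / tc). Impulse balance
    over a stride gives Fmax; twice-integrating the CoM acceleration gives
    the displacement dz, and stiffness is Fmax / dz (N/m)."""
    g = 9.81
    f_max = mass * g * (math.pi / 2.0) * (t_flight / t_contact + 1.0)
    dz = f_max * t_contact**2 / (mass * math.pi**2) - g * t_contact**2 / 8.0
    return f_max / dz

# Intact leg vs a hypothetical prosthetic leg with longer contact time:
print(vertical_stiffness(mass=70.0, t_contact=0.22, t_flight=0.12))
print(vertical_stiffness(mass=70.0, t_contact=0.26, t_flight=0.10))
```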

  14. Sex differences in association of race performance, skin-fold thicknesses, and training variables for recreational half-marathon runners.

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Senn, Oliver

    2010-12-01

    The purpose of this study was to investigate the associations of selected skin-fold thicknesses and training variables with half-marathon race time, for both male and female recreational runners, using bi- and multivariate analysis. In 52 men, two skin-fold thicknesses (abdominal and calf) were significantly and positively correlated with race time, whereas in 15 women, five (pectoral, mid-axilla, subscapular, abdominal, and suprailiac) showed positive and significant relations with total race time. In men, the mean weekly running distance, minimum distance run per week, maximum distance run per week, mean weekly hours of running, number of running training sessions per week, and mean speed of the training sessions were significantly and negatively related to total race time, but not in women. Interaction analyses suggested that race time was more strongly associated with anthropometry in women than in men. Race time for the women was independently associated with the sum of eight skin-folds; for the men, only the mean speed during training sessions was independently associated. Skin-fold thicknesses and training variables in these groups were thus related to race time differently according to sex.
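    The bi- and multivariate analysis described here boils down to ordinary least-squares regression of race time on several predictors. A minimal sketch, with entirely invented data and two predictor names standing in for the skin-fold and training variables:

```python
import numpy as np

# Invented sample: race time (min) vs. abdominal skin-fold (mm) and
# mean training speed (km/h) for a handful of hypothetical runners.
skinfold = np.array([18.0, 25.0, 12.0, 30.0, 22.0, 15.0])
speed = np.array([11.5, 10.0, 12.5, 9.5, 10.5, 12.0])
race_time = np.array([100.0, 115.0, 92.0, 124.0, 110.0, 96.0])

# Design matrix with an intercept column; solve min ||Xb - y||^2.
X = np.column_stack([np.ones_like(skinfold), skinfold, speed])
coef, *_ = np.linalg.lstsq(X, race_time, rcond=None)
pred = X @ coef

# R^2: fraction of race-time variance explained by the two predictors.
r2 = 1 - np.sum((race_time - pred) ** 2) / np.sum((race_time - race_time.mean()) ** 2)
print(f"intercept={coef[0]:.1f}, b_skinfold={coef[1]:.2f}, b_speed={coef[2]:.2f}, R^2={r2:.2f}")
```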

  15. American Academy of Podiatric Sports Medicine

    MedlinePlus


  16. Jumping and hopping in elite and amateur orienteering athletes and correlations to sprinting and running.

    PubMed

    Hébert-Losier, Kim; Jensen, Kurt; Holmberg, Hans-Christer

    2014-11-01

    Jumping and hopping are used to measure lower-body muscle power, stiffness, and stretch-shortening-cycle utilization in sports, with several studies reporting correlations between such measures and sprinting and/or running abilities in athletes. Neither jumping and hopping nor correlations with sprinting and/or running have been examined in orienteering athletes. The authors investigated squat jump (SJ), countermovement jump (CMJ), standing long jump (SLJ), and hopping performed by 8 elite and 8 amateur male foot-orienteering athletes (29 ± 7 y, 183 ± 5 cm, 73 ± 7 kg) and possible correlations to road, path, and forest running and sprinting performance, as well as running economy, velocity at anaerobic threshold, and peak oxygen uptake (VO(2peak)) from treadmill assessments. During SJs and CMJs, elites demonstrated superior relative peak forces, times to peak force, and prestretch augmentation, albeit lower SJ heights and peak powers. Between-groups differences were unclear for CMJ heights, hopping stiffness, and most SLJ parameters. Large pairwise correlations were observed between relative peak and time to peak forces and sprinting velocities; time to peak forces and running velocities; and prestretch augmentation and forest-running velocities. Prestretch augmentation and time to peak forces were moderately correlated to VO(2peak). Correlations between running economy and jumping or hopping were small or trivial. Overall, the elites exhibited superior stretch-shortening-cycle utilization and rapid generation of high relative maximal forces, especially vertically. These functional measures were more closely related to sprinting and/or running abilities, indicating benefits of lower-body training in orienteering.

  17. A nano ultra-performance liquid chromatography-high resolution mass spectrometry approach for global metabolomic profiling and case study on drug-resistant multiple myeloma.

    PubMed

    Jones, Drew R; Wu, Zhiping; Chauhan, Dharminder; Anderson, Kenneth C; Peng, Junmin

    2014-04-01

    Global metabolomics relies on highly reproducible and sensitive detection of a wide range of metabolites in biological samples. Here we report the optimization of metabolome analysis by nanoflow ultraperformance liquid chromatography coupled to high-resolution orbitrap mass spectrometry. Reliable peak features were extracted from the LC-MS runs based on mandatory detection in duplicates and additional noise filtering according to blank injections. The run-to-run variation in peak area showed a median of 14%, and the false discovery rate during a mock comparison was evaluated. To maximize the number of peak features identified, we systematically characterized the effect of sample loading amount, gradient length, and MS resolution. The number of features initially rose and later reached a plateau as a function of sample amount, fitting a hyperbolic curve. Longer gradients improved unique feature detection in part by time-resolving isobaric species. Increasing the MS resolution up to 120000 also aided in the differentiation of near isobaric metabolites, but higher MS resolution reduced the data acquisition rate and conferred no benefits, as predicted from a theoretical simulation of possible metabolites. Moreover, a biphasic LC gradient allowed even distribution of peak features across the elution, yielding markedly more peak features than the linear gradient. Using this robust nUPLC-HRMS platform, we were able to consistently analyze ~6500 metabolite features in a single 60 min gradient from 2 mg of yeast, equivalent to ~50 million cells. We applied this optimized method in a case study of drug (bortezomib) resistant and drug-sensitive multiple myeloma cells. Overall, 18% of metabolite features were matched to KEGG identifiers, enabling pathway enrichment analysis. Principal component analysis and heat map data correctly clustered isogenic phenotypes, highlighting the potential for hundreds of small molecule biomarkers of cancer drug resistance.
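    The reported saturation of peak features with sample amount ("fitting a hyperbolic curve") is the kind of relationship that can be captured with a two-parameter rectangular hyperbola, f(x) = a·x/(b + x). A minimal sketch with made-up loading amounts and feature counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, a, b):
    """Rectangular hyperbola: rises ~linearly at low x, plateaus at a."""
    return a * x / (b + x)

# Hypothetical data: sample amount (mg) vs. detected peak features.
amount = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
features = np.array([2100, 3600, 5100, 6400, 7000, 7300])

(a, b), _ = curve_fit(hyperbola, amount, features, p0=[8000, 1.0])
print(f"plateau ~ {a:.0f} features, half-saturation at {b:.2f} mg")
```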

  18. Enhanced methodology of focus control and monitoring on scanner tool

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Jen; Kim, Young Ki; Hao, Xueli; Gomez, Juan-Manuel; Tian, Ye; Kamalizadeh, Ferhad; Hanson, Justin K.

    2017-03-01

    As technology nodes shrink from 14 nm to 7 nm, reliable tool monitoring techniques become more critical for advanced semiconductor fabs to achieve high yield and quality. Tool health monitoring methods involve periodic sampling of moderately processed test wafers to detect particles, defects, and tool instability in order to ensure proper tool health. For lithography TWINSCAN scanner tools, the requirements for overlay stability and focus control are very strict. Current scanner tool health monitoring methods include running BaseLiner periodically to ensure proper tool stability. The focus measurement on YIELDSTAR, by real-time or library-based reconstruction of critical dimensions (CD) and side wall angle (SWA), has been demonstrated as an accurate metrology input to the control loop. The high accuracy and repeatability of the YIELDSTAR focus measurement provide a common reference for scanner setup and user processes. To further improve metrology and matching performance, Diffraction Based Focus (DBF) metrology, which enables accurate, fast, and non-destructive focus acquisition, has been successfully utilized for focus monitoring/control of TWINSCAN NXT immersion scanners. The optimal DBF target was determined to have minimal dose crosstalk, dynamic precision, set-get residual, and lens aberration sensitivity. By exploiting this new measurement target design, 80% improvement in tool-to-tool matching, >16% improvement in run-to-run mean focus stability, and >32% improvement in focus uniformity have been demonstrated compared with the previous BaseLiner methodology. Matching <2.4 nm across multiple NXT immersion scanners has been achieved with the new methodology of setting a baseline reference. This baseline technique has also been evaluated to have consistent performance with either the conventional BaseLiner low numerical aperture mode (NA = 1.20) or the advanced illumination high-NA mode (NA = 1.35). This enhanced methodology of focus control and monitoring across multiple illumination conditions opens an avenue to significantly reduce Focus-Exposure Matrix (FEM) wafer exposures for new product/layer best focus (BF) setup.

  19. Short-term changes in running mechanics and foot strike pattern after introduction to minimalistic footwear.

    PubMed

    Willson, John D; Bjorhus, Jordan S; Williams, D S Blaise; Butler, Robert J; Porcari, John P; Kernozek, Thomas W

    2014-01-01

    Minimalistic footwear has garnered widespread interest in the running community, based largely on the premise that such footwear may reduce certain running-related injury risk factors through adaptations in running mechanics and foot strike pattern. Objective: To examine short-term adaptations in running mechanics among runners who typically run in conventional cushioned-heel running shoes as they transition to minimalistic footwear. Design: A 2-week, prospective, observational study. Setting: A movement science laboratory. Participants: Nineteen female runners with a rear foot strike (RFS) pattern who usually train in conventional running shoes. Methods: The participants trained for 20 minutes, 3 times per week, for 2 weeks using minimalistic footwear. Three-dimensional lower extremity running mechanics were analyzed before and after this 2-week period. Main Outcome Measures: Hip, knee, and ankle joint kinematics at initial contact; step length; stance time; peak ankle joint moment and joint work; impact peak; vertical ground reaction force loading rate; and foot strike pattern preference were evaluated before and after the intervention. Results: The knee flexion angle at initial contact increased 3.8° (P < .01), but the ankle and hip flexion angles at initial contact did not change after training. No changes in ankle joint kinetics or running temporospatial parameters were observed. Before the intervention, the majority of participants (71%) demonstrated an RFS pattern while running in minimalistic footwear, and the proportion of runners with an RFS pattern did not decrease after 2 weeks (P = .25). Runners who chose an RFS pattern in minimalistic shoes experienced a vertical loading rate 3 times greater than those who chose to run with a non-RFS pattern. Conclusions: Few systematic changes in running mechanics were observed among participants after 2 weeks of training in minimalistic footwear. The majority of the participants continued to use an RFS pattern after training in minimalistic footwear, and these participants experienced higher vertical loading rates. Continued exposure to these greater loading rates may have detrimental effects over time. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
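    The vertical loading rate reported here is commonly computed as the slope of the vertical ground reaction force curve over a window before the impact peak; one widespread convention uses the 20-80% span of the interval from foot strike to impact peak. A minimal sketch under that assumption (not necessarily this study's exact definition), with a synthetic force trace:

```python
import numpy as np

def vertical_loading_rate(grf, fs, impact_idx):
    """Average vertical loading rate (N/s) over 20-80% of the interval
    from foot strike (sample 0) to the impact peak (impact_idx).
    The 20-80% convention is one common choice, assumed here."""
    lo, hi = int(0.2 * impact_idx), int(0.8 * impact_idx)
    return (grf[hi] - grf[lo]) / ((hi - lo) / fs)

# Synthetic example: 1000 Hz trace with an impact transient near 40 ms.
fs = 1000
t = np.arange(0, 0.25, 1 / fs)
grf = 1600 * np.sin(np.pi * t / 0.25) + 400 * np.exp(-((t - 0.04) / 0.012) ** 2)
impact_idx = 40                       # sample index of the impact transient
print(f"VALR ~ {vertical_loading_rate(grf, fs, impact_idx):.0f} N/s")
```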

  20. 77 FR 60165 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass September 26, 2012. I... FICC proposes to move the time at which its Mortgage-Backed Securities Division (``MBSD'') runs its... processing passes. MBSD currently runs its first processing pass of the day (historically referred to as the...

  1. Effect of Light/Dark Cycle on Wheel Running and Responding Reinforced by the Opportunity to Run Depends on Postsession Feeding Time

    ERIC Educational Resources Information Center

    Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.

    2008-01-01

    Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…

  2. Magnetosphere-Ionosphere Coupling During a Geomagnetic Substorm on March 1, 2017

    NASA Astrophysics Data System (ADS)

    Coster, A. J.; Hampton, D. L.; Sazykin, S. Y.; Wolf, R.; Huba, J.; Varney, R. H.; Reimer, A.; Lynch, K. A.; Samara, M.; Michell, R.

    2017-12-01

    On March 1, 2017, at approximately 10 UT, magnetometers at Ft Yukon and Poker Flat in Alaska measured the classic signature of an auroral substorm: a rapid decrease in the northward component of the magnetic field. Nearby, a camera at Venetie, Alaska captured intense visual brightening of multiple auroral arcs at approximately the same time. Our data and model analysis focuses on this time period. We take advantage of the extensive instrumentation that was in place in northern Alaska on this date due to the ISINGLASS rocket campaign. Although no rockets were flown on March 1, 2017, this substorm was monitored at Poker by the three-filter all-sky survey and at Venetie by three all-sky cameras running simultaneously, each filtered for a different wavelength. Our analysis includes coincident high-precision GNSS receiver data providing total electron content (TEC) measurements during the overhead auroral arcs. The receiver at Venetie also monitored L-band scintillation. In addition, the Poker Flat Incoherent Scatter Radar captured the rapid ionization enhancement in the 100-200 km region across multiple beams looking to the north of Poker. The timing of these events between the multiple sites is closely examined, and inferences about the propagation of this event are described. The available SuperDARN data from this time period indicate that this substorm happened at about the same time within the Harang discontinuity. This event presented an unprecedented opportunity to observe the occurrence and development of a substorm with a combination of ground-based remote sensing instruments. To support our interpretation of the data, we present the first simulations of the magnetosphere-ionosphere coupled system during a substorm with the self-consistently coupled SAMI/RCM code.

  3. Speech and pause characteristics associated with voluntary rate reduction in Parkinson's disease and Multiple Sclerosis.

    PubMed

    Tjaden, Kris; Wilding, Greg

    2011-01-01

    The primary purpose of this study was to investigate how speakers with Parkinson's disease (PD) and Multiple Sclerosis (MS) accomplish voluntary reductions in speech rate. A group of talkers with no history of neurological disease was included for comparison. This study was motivated by the idea that knowledge of how speakers with dysarthria voluntarily accomplish a reduced speech rate would contribute toward a descriptive model of speaking rate change in dysarthria. Such a model has the potential to assist in identifying rate control strategies to receive focus in clinical treatment programs and also would advance understanding of global speech timing in dysarthria. All speakers read a passage in Habitual and Slow conditions. Speech rate, articulation rate, pause duration, and pause frequency were measured. All speaker groups adjusted articulation time as well as pause time to reduce overall speech rate. Group differences in how voluntary rate reduction was accomplished were primarily one of quantity or degree. Overall, a slower-than-normal rate was associated with a reduced articulation rate, shorter speech runs that included fewer syllables, and longer more frequent pauses. Taken together, these results suggest that existing skills or strategies used by patients should be emphasized in dysarthria training programs focusing on rate reduction. Results further suggest that a model of voluntary speech rate reduction based on neurologically normal speech shows promise as being applicable for mild to moderate dysarthria. The reader will be able to: (1) describe the importance of studying voluntary adjustments in speech rate in dysarthria, (2) discuss how speakers with Parkinson's disease and Multiple Sclerosis adjust articulation time and pause time to slow speech rate. Copyright © 2011 Elsevier Inc. All rights reserved.
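    The distinction measured here, speech rate (which includes pauses) versus articulation rate (which excludes them), is straightforward to compute from a syllable count and annotated pause intervals. A minimal sketch with invented numbers:

```python
def rate_measures(n_syllables, total_s, pauses_s):
    """Speech rate counts all elapsed time; articulation rate excludes
    pause time, so slowing via longer pauses lowers only the former."""
    pause_total = sum(pauses_s)
    speech_rate = n_syllables / total_s                  # syll/s overall
    artic_rate = n_syllables / (total_s - pause_total)   # syll/s while talking
    return speech_rate, artic_rate, pause_total / total_s

# Hypothetical passage reading: 220 syllables in 75 s with these pauses.
sr, ar, pause_frac = rate_measures(220, 75.0, [0.8, 1.2, 0.5, 2.0, 1.5])
print(f"speech rate {sr:.2f} syll/s, articulation rate {ar:.2f} syll/s, "
      f"{pause_frac:.0%} of time paused")
```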

  4. The Influence of Running on Foot Posture and In-Shoe Plantar Pressures.

    PubMed

    Bravo-Aguilar, María; Gijón-Noguerón, Gabriel; Luque-Suarez, Alejandro; Abian-Vicen, Javier

    2016-03-01

    Running can be considered a high-impact practice, and most people practicing continuous running experience lower-limb injuries. The aim of this study was to determine the influence of 45 min of running on foot posture and plantar pressures. The sample comprised 116 healthy adults (92 men and 24 women) with no foot-related injuries. The mean ± SD age of the participants was 28.31 ± 6.01 years; body mass index, 23.45 ± 1.96; and training time, 11.02 ± 4.22 h/wk. Outcome measures were collected before and after 45 min of running at an average speed of 12 km/h, and included the Foot Posture Index (FPI) and a baropodometric analysis. The results show that foot posture can be modified after 45 min of running. The mean ± SD FPI changed from 6.15 ± 2.61 to 4.86 ± 2.65 (P < .001). Significant decreases in mean plantar pressures in the external, internal, rearfoot, and forefoot edges were found after 45 min of running. Peak plantar pressures in the forefoot decreased after running. The pressure-time integral decreased during the heel strike phase in the internal edge of the foot. In addition, a decrease was found in the pressure-time integral during the heel-off phase in the internal and rearfoot edges. The findings suggest that after 45 min of running, a pronated foot tends to change into a more neutral position, and decreased plantar pressures were found after the run.
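    The pressure-time integral reported for each foot region is simply the area under the regional pressure curve over stance. A minimal sketch using trapezoidal integration on a synthetic pressure trace (the shape and numbers are invented):

```python
import numpy as np

# Synthetic in-shoe pressure for one region over a 0.7 s stance phase,
# sampled at 100 Hz (values in kPa).
fs = 100
t = np.arange(0, 0.7, 1 / fs)
pressure = 250 * np.sin(np.pi * t / 0.7) ** 2

peak = pressure.max()                       # peak pressure, kPa
pti = np.trapz(pressure, dx=1 / fs)         # pressure-time integral, kPa*s
print(f"peak = {peak:.0f} kPa, pressure-time integral = {pti:.1f} kPa*s")
```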

  5. Satellite Data Processing System (SDPS) users manual V1.0

    NASA Technical Reports Server (NTRS)

    Caruso, Michael; Dunn, Chris

    1989-01-01

    SDPS is a menu driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program. It does not contain complete type checking or error diagnostics which may allow the program to crash. Known anomalies will be mentioned in the appropriate section as notes or cautions. SDPS was designed to be used on Sun Microsystems Workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed to be used on workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations and the SunView window system.

  6. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    NASA Astrophysics Data System (ADS)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and ATLAS Management, for long-term trends and accounting information about ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer from the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed with the visual identity of the provided graphical elements in mind, and with re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with this separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions that correlate multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  7. Interpreting Beryllium-7 and Lead-210 fluxes and ratios for age dating fluvial sediments in Difficult Run Watershed, Virginia, USA

    NASA Astrophysics Data System (ADS)

    Karwan, D. L.; Pizzuto, J. E.; Skalak, K.; Benthem, A.

    2016-12-01

    The sources and transport of suspended sediments within watersheds of varying sizes remain an important area of study within the geosciences. Short-term fallout radionuclides, such as Beryllium-7 (7Be) and Lead-210 (210Pb), and their ratios can be a valuable tool for gaining insight into suspended sediment transport dynamics. We use these techniques in combination with other sediment exchange and transport models to estimate residence and transport times of suspended sediment in nested reaches of the Difficult Run watershed (Virginia, USA) on timescales from storm events to centuries and longer. During several winter and spring 2015-2016 precipitation events, Beryllium-7 to excess Lead-210 ratios varied from 0.4-2.5 in direct channel precipitation and 0.2-1 on suspended sediment. Previously published age-dating models would suggest that the suspended sediments were originally "tagged" by, or in contact with, wet fallout of Beryllium-7 approximately 20-80 days before sampling. Sediments at the upstream reach (watershed size 14 km2) tend to be older (~75 days), while sediments at the downstream reach (watershed size 117 km2) tend to be newer (~20 days). We use multiple sediment transport models and hypothesize that fluvial sediments are tagged with direct channel precipitation between the upstream and downstream reaches, explaining their apparently younger age. Our analysis includes error propagation as well as a comparison of radioisotope gamma analyses from different labs across multiple institutions.
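    The age inference rests on the fact that the 7Be/210Pbxs ratio of freshly tagged sediment decays at a known combined rate, so a measured ratio can be inverted for time since tagging: t = ln(R0/R) / (λ_Be − λ_Pb). A minimal sketch assuming an initial fallout ratio R0 and the standard half-lives (7Be ≈ 53.3 d, 210Pb ≈ 22.3 y); the example values are hypothetical, drawn from the ranges reported above:

```python
import math

LAMBDA_BE7 = math.log(2) / 53.3               # 7Be decay constant, 1/day
LAMBDA_PB210 = math.log(2) / (22.3 * 365.25)  # 210Pb decay constant, 1/day

def sediment_age_days(ratio_measured, ratio_initial):
    """Days since sediment was tagged by fallout, assuming the
    7Be/210Pbxs ratio decays from ratio_initial at the combined rate.
    ratio_initial would come from direct channel precipitation samples."""
    return math.log(ratio_initial / ratio_measured) / (LAMBDA_BE7 - LAMBDA_PB210)

# Hypothetical: fallout ratio 1.5, suspended-sediment ratio 0.6.
print(f"apparent age ~ {sediment_age_days(0.6, 1.5):.0f} days")
```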

  8. Two-Dimensional High-Lift Aerodynamic Optimization Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data were generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. The 'pressure difference rule,' which states that the maximum lift condition corresponds to a certain pressure difference between the peak suction pressure and the pressure at the trailing edge of the element, was applied and verified with experimental observations for this configuration. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural nets were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 44% compared with traditional gradient-based optimization procedures for multiple optimization runs.
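    The pattern described, train a cheap neural-net surrogate on expensive CFD results and then let a gradient-based optimizer search the surrogate, can be illustrated compactly. The sketch below trains a one-hidden-layer network on a made-up lift function of two rigging parameters and then climbs the surrogate's analytic input gradient; the data, architecture, and step sizes are all invented and unrelated to the NASA Ames setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "CFD" data: lift coefficient as a smooth function of two
# normalized rigging parameters (stand-ins for flap deflection and gap).
X = rng.uniform(-1, 1, (200, 2))
y = (1.8 - (X[:, 0] - 0.3) ** 2 - 0.5 * (X[:, 1] + 0.2) ** 2)[:, None]

# One-hidden-layer tanh network trained by plain gradient descent (MSE).
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    d = 2 * (pred - y) / len(X)          # dLoss/dpred
    dW2, db2 = h.T @ d, d.sum(0)
    dz = (d @ W2.T) * (1 - h ** 2)       # backprop through tanh
    dW1, db1 = X.T @ dz, dz.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def surrogate_grad(x):
    """Predicted lift and its analytic gradient w.r.t. the inputs."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2)[0], W1 @ ((1 - h ** 2) * W2[:, 0])

# Gradient ascent on the surrogate: cheap once the net is trained.
x = np.zeros(2)
for _ in range(200):
    lift, g = surrogate_grad(x)
    x = np.clip(x + 0.05 * g, -1, 1)
print(f"optimum ~ {x.round(2)}, predicted CL ~ {lift:.2f} (true peak at [0.3, -0.2])")
```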

  9. Multi-profile Bayesian alignment model for LC-MS data analysis with integration of internal standards

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Di Poto, Cristina; Pannell, Lewis K.; Mechref, Yehia; Wang, Yue; Ressom, Habtom W.

    2013-01-01

    Motivation: Liquid chromatography-mass spectrometry (LC-MS) has been widely used for profiling expression levels of biomolecules in various ‘-omic’ studies including proteomics, metabolomics and glycomics. Appropriate LC-MS data preprocessing steps are needed to detect true differences between biological groups. Retention time (RT) alignment, which is required to ensure that ion intensity measurements among multiple LC-MS runs are comparable, is one of the most important yet challenging preprocessing steps. Current alignment approaches estimate RT variability using either single chromatograms or detected peaks, but do not simultaneously take into account the complementary information embedded in the entire LC-MS data. Results: We propose a Bayesian alignment model for LC-MS data analysis. The alignment model provides estimates of the RT variability along with uncertainty measures. The model enables integration of multiple sources of information including internal standards and clustered chromatograms in a mathematically rigorous framework. We apply the model to LC-MS metabolomic, proteomic and glycomic data. The performance of the model is evaluated based on ground-truth data, by measuring correlation of variation, RT difference across runs and peak-matching performance. We demonstrate that Bayesian alignment model improves significantly the RT alignment performance through appropriate integration of relevant information. Availability and implementation: MATLAB code, raw and preprocessed LC-MS data are available at http://omics.georgetown.edu/alignLCMS.html Contact: hwr@georgetown.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24013927

  10. Summary of Propagation Cases of the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram; Loubeau, Alexandra

    2017-01-01

    A summary is provided for the propagation portion of the second AIAA Sonic Boom Workshop held January 8, 2017 in conjunction with the AIAA SciTech 2017 conference. Near-field pressure waveforms for two cases were supplied and ground signatures at multiple azimuthal angles as well as their corresponding loudness metrics were requested from 10 participants, representing 3 countries. Each case had some required runs, as well as some optional runs. The required cases included atmospheric profiles with measured data including winds, using Radiosonde balloon data at multiple geographically spread locations, while the optional cases included temperature and pressure profiles from the US Standard atmosphere. The humidity profiles provided for the optional cases were taken from ANSI guidance, as the authors were unaware of an accepted standard at the time the cases were released to the participants. Participants provided ground signatures along with the requested data, including some loudness metrics using their best practices, which included lossy as well as lossless schemes. All the participants' submissions, for each case, are compared and discussed. Noise or loudness measures are calculated and detailed comparisons and statistical analyses are performed and presented. It has been observed that the variation in the loudness measures and spread between participants' submissions increased as the computation proceeded from under-track locations towards the lateral cut-off. Lessons learned during this workshop are discussed and recommendations are made for potential improvements and possible subsequent workshops as we collectively attempt to refine our analysis methods.

  11. An open source web interface for linking models to infrastructure system databases

    NASA Astrophysics Data System (ADS)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems, such as water or energy systems, are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models become more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open-source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a JSON web API that allows external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform: the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
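    As an illustration of the app pattern described, an external program talking JSON to a web endpoint to fetch a network and trigger a run, here is a minimal client sketch. The endpoint URL, method names, and payload fields are hypothetical placeholders, not Hydra Platform's actual API.

```python
import json
import urllib.request

BASE = "https://example.org/hydra"   # placeholder server, not a real endpoint

def call(method, **params):
    """POST a JSON-RPC-style request and return the decoded response.
    The 'method'/'params' envelope is a generic convention assumed for
    illustration; consult the real Hydra Platform docs for its API."""
    body = json.dumps({"method": method, "params": params}).encode()
    req = urllib.request.Request(BASE, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical app workflow (commented out: the server above is fictitious):
# network = call("get_network", network_id=42)
# run = call("start_model_run", network_id=42, model="allocation_v2")
# print(run["status"])
```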

  12. Running speed during training and percent body fat predict race time in recreational male marathoners.

    PubMed

    Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-01-01

    Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same anthropometric and training characteristics as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance in marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and a training speed close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners.
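    The published regression is explicit enough to evaluate directly; the sketch below simply applies it (keeping in mind that r² = 0.44, so predictions carry substantial residual spread):

```python
def marathon_time_min(percent_body_fat, training_speed_kmh):
    """Regression reported above (r^2 = 0.44): race time in minutes as a
    function of percent body fat and mean training speed (km/h)."""
    return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh

# Example: 15% body fat, training at 11 km/h.
t = marathon_time_min(15.0, 11.0)
print(f"predicted race time ~ {t:.0f} min ({t // 60:.0f} h {t % 60:.0f} min)")
```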

  13. Distribution, stock composition and timing, and tagging response of wild Chinook Salmon returning to a large, free-flowing river basin

    USGS Publications Warehouse

    Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.

    2014-01-01

    Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.

  14. Calculating the renormalisation group equations of a SUSY model with Susyno

    NASA Astrophysics Data System (ADS)

    Fonseca, Renato M.

    2012-10-01

    Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features: Susyno contains functions that (a) calculate the Lagrangian of supersymmetric models and (b) calculate some group theoretical quantities. Some of these functions are available to the user and can be freely used. A built-in help system provides detailed information. Running time: Tests were made using a computer with an Intel Core i5 760 CPU, running under Ubuntu 11.04 and with Mathematica 8.0.1 installed. Using the option to suppress printing, the one- and two-loop beta functions of the MSSM were obtained in 2.5 s (NMSSM: 5.4 s). Note that the running time scales up very quickly with the total number of fields in the model. References: [1] S.P. Martin and M.T. Vaughn, Phys. Rev. D 50 (1994) 2282. [Erratum-ibid D 78 (2008) 039903] [arXiv:hep-ph/9311340]. [2] Y. Yamada, Phys. Rev. D 50 (1994) 3537 [arXiv:hep-ph/9401241].

  15. Running with a minimalist shoe increases plantar pressure in the forefoot region of healthy female runners.

    PubMed

    Bergstra, S A; Kluitenberg, B; Dekker, R; Bredeweg, S W; Postema, K; Van den Heuvel, E R; Hijmans, J M; Sobhani, S

    2015-07-01

    Minimalist running shoes have been proposed as an alternative to barefoot running. However, several studies have reported cases of forefoot stress fractures after switching from standard to minimalist shoes. Therefore, the aim of the current study was to investigate the differences in plantar pressure in the forefoot region between running with a minimalist shoe and running with a standard shoe in healthy female runners during overground running. Randomized crossover design. In-shoe plantar pressure measurements were recorded from eighteen healthy female runners. Peak pressure, maximum mean pressure, pressure-time integral and instant of peak pressure were assessed for seven foot areas. Force-time integral, stride time, stance time, swing time, shoe comfort and landing type were assessed for both shoe types. A linear mixed model was used to analyze the data. Peak pressure and maximum mean pressure were higher in the medial forefoot (13.5% and 7.46%, respectively), central forefoot (37.5% and 29.2%, respectively) and lateral forefoot (37.9% and 20.4%, respectively) for the minimalist shoe condition. Stance time was reduced by 3.81%. No relevant differences in shoe comfort or landing strategy were found. Running with a minimalist shoe increased plantar pressure without a change in landing pattern. This increased pressure in the forefoot region might play a role in the occurrence of metatarsal stress fractures in runners who have switched to minimalist shoes, and warrants a cautious approach to transitioning to minimalist shoe use. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  16. Effect of a prior intermittent run at vVO2max on oxygen kinetics during an all-out severe run in humans.

    PubMed

    Billat, V L; Bocquet, V; Slawinski, J; Laffite, L; Demarle, A; Chassaing, P; Koralsztein, J P

    2000-09-01

    The purpose of this study was to examine the influence of prior intermittent running at vVO2max on oxygen kinetics during a continuous severe-intensity run and on the time spent at VO2max. Eight long-distance runners performed three maximal tests on a synthetic track (400 m) whilst breathing through the COSMED K4 portable telemetric metabolic analyser: i) an incremental test which determined the velocity at the lactate threshold (vLT), VO2max and the velocity associated with VO2max (vVO2max); ii) a continuous severe-intensity run at vLT plus 50% of the difference between vLT and vVO2max (vdelta50; 91.3+/-1.6% VO2max), preceded by a light continuous 20-minute run at 50% of vVO2max (light warm-up); iii) the same continuous severe-intensity run at vdelta50 preceded by an interval training exercise (hard warm-up) of alternating hard running bouts at 100% of vVO2max and light running at 50% of vVO2max (30 seconds each) performed until exhaustion (on average 19+/-5 min with 19+/-5 interval repetitions). This hard warm-up speeded the VO2 kinetics: the time constant was reduced by 45% (28+/-7 s vs 51+/-37 s) and the slow component of VO2 (deltaVO2 6-3 min) was abolished (-143+/-271 ml x min(-1) vs 291+/-153 ml x min(-1)). In conclusion, despite a significantly shorter total run time at vdelta50 (6 min 19 s +/- 0 min 17 s vs 8 min 20 s +/- 1 min 45 s, p=0.02) after the intermittent warm-up at vVO2max, the time spent specifically at VO2max in the severe continuous run at vdelta50 was not significantly different.
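    The time constant quantified here comes from fitting the breath-by-breath VO2 response; a common approach fits a mono-exponential onset, VO2(t) = A0 + A·(1 − exp(−(t − TD)/τ)), after a time delay TD. A minimal sketch with synthetic data (the model form is a standard convention, not necessarily the exact fitting procedure of this study):

```python
import numpy as np
from scipy.optimize import curve_fit

def vo2_onset(t, a0, amp, td, tau):
    """Mono-exponential VO2 onset after a time delay td (seconds)."""
    return a0 + amp * (1 - np.exp(-np.maximum(t - td, 0) / tau))

# Synthetic breath-by-breath data: baseline 800 ml/min rising toward
# 4000 ml/min with tau = 30 s, plus measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0, 360, 5.0)
vo2 = vo2_onset(t, 800, 3200, 15, 30) + rng.normal(0, 80, t.size)

p0 = [800, 3000, 10, 40]                      # rough initial guesses
(a0, amp, td, tau), _ = curve_fit(vo2_onset, t, vo2, p0=p0)
print(f"tau ~ {tau:.1f} s, amplitude ~ {amp:.0f} ml/min, delay ~ {td:.1f} s")
```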

  17. An automated metrics system to measure and improve the success of laboratory automation implementation.

    PubMed

    Benn, Neil; Turlais, Fabrice; Clark, Victoria; Jones, Mike; Clulow, Stephen

    2007-03-01

    The authors describe a system for collecting usage metrics from widely distributed automation systems. An application that records and stores usage data centrally, calculates run times, and charts the data was developed. Data were collected over 20 months from at least 28 workstations. The application was used to plot bar charts of date versus run time for individual workstations, the automation in a specific laboratory, or automation of a specified type. The authors show that revised user training, redeployment of equipment, and running complementary processes on one workstation can increase the average number of runs by up to 20-fold and run times by up to 450%. Active monitoring of usage leads to more effective use of automation. Usage data could be used to determine whether purchasing particular automation was a good investment.

  18. Hydrologic Monitoring in the Deep Subsurface to Support Repository Performance

    NASA Astrophysics Data System (ADS)

    Hubbell, J. M.; Heath, G. L.; Scott, C. L.

    2007-12-01

    The INL has installed and operated several vadose zone and groundwater monitoring systems at arid and humid sites to depths of about 200 m. Some of these systems have been in continuous operation for over 12 years. It is important that the systems be physically robust and simple, yet versatile enough that they can operate for extended time periods with little or no maintenance. Monitoring instruments are frequently installed and run to characterize the site, collect data during site operation, and continue to run for long-term stewardship, necessitating sensors that can be maintained or serviced. Sensors are carefully chosen based on the perceived data requirements over the life of the site. An emphasis is given to direct measurements such as tensiometers (portable and advanced), neutron probes, drain gauges, temperature, and wells or sampling for fluids and gases. Other complementary data can include TDR/capacitance, radiation detectors, and larger-scale geophysical techniques (3-D resistivity and EM) for volumetric measurements. Commercially available instruments may have to be modified for use at greater depths, to allow multiple instruments in a single borehole, or to perform the intended monitoring function. Access tubes (some open at the bottom) can be placed to allow insertion of multiple sensors (radiation, neutron and portable sensors/samplers), future drilling/sampling, and installation of new instruments at a later time. The installation techniques and backfill materials must be chosen, and the measurement technique tested, to ensure representative data collection for the parameters of interest. The data collection system can be linked to climatic data (precipitation, barometric pressure, snow depth, runoff, surface water sources) that may influence the site's subsurface hydrology. The instruments are then connected to a real-time automated data collection system that collects, stores, and provides access to the data. These systems allow easy access, automatic data quality checks with notification, and processing and presentation of the data in real time through the web. The systems can be designed to manipulate/test the equipment remotely. Data from several sites will be presented showing that continuous monitoring is necessary to detect rapid changes in the deep vadose zone and groundwater at fractured rock sites.

  19. Stride-to-stride variability and complexity between novice and experienced runners during a prolonged run at anaerobic threshold speed.

    PubMed

    Mo, Shiwei; Chow, Daniel H K

    2018-05-19

    Motor control, which is related to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed to improve performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed, and whether there are differences between runners with different training experience. The aim was to compare characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER). Both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at his/her AT speed. Stride interval dynamics were obtained throughout the run, with the middle 30 min equally divided into six time intervals (denoted T1, T2, T3, T4, T5 and T6). The mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that mean stride interval increased significantly with running time in a non-linear trend (p < 0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups with slight differences. These findings provide insights into how the motor control system adapts to the progression of fatigue, and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
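    The two dynamics measures used here are easy to state concretely: CV is the standard deviation of the stride intervals over their mean, and the scaling exponent alpha typically comes from detrended fluctuation analysis (DFA) of the stride interval series. A minimal DFA-1 sketch on synthetic stride intervals (the window sizes and implementation details are common choices, not necessarily the study's):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """DFA-1 scaling exponent of a 1-D series: integrate the centered
    series, linearly detrend within windows of each size, and fit the
    log-log slope of RMS fluctuation versus window size."""
    y = np.cumsum(x - np.mean(x))
    fluct = []
    for n in scales:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)          # local linear trend
            rms.append(np.mean((seg - (a * t + b)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# Synthetic stride intervals: ~0.75 s with small correlated drift + noise.
rng = np.random.default_rng(2)
drift = np.cumsum(rng.normal(0, 0.002, 512))
stride = 0.75 + drift - np.linspace(0, drift[-1], 512) + rng.normal(0, 0.01, 512)

cv = stride.std() / stride.mean() * 100
print(f"CV = {cv:.2f}%  alpha = {dfa_alpha(stride):.2f}")
```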

  20. The use of coded PCR primers enables high-throughput sequencing of multiple homolog amplification products by 454 parallel sequencing.

    PubMed

    Binladen, Jonas; Gilbert, M Thomas P; Bollback, Jonathan P; Panitz, Frank; Bendixen, Christian; Nielsen, Rasmus; Willerslev, Eske

    2007-02-14

    The invention of the Genome Sequencer 20 DNA Sequencing System (454 parallel sequencing platform) has enabled the rapid and high-volume production of sequence data. Until now, however, individual emulsion PCR (emPCR) reactions and subsequent sequencing runs have been unable to combine template DNA from multiple individuals, as homologous sequences cannot be subsequently assigned to their original sources. We use conventional PCR with 5'-nucleotide-tagged primers to generate homologous DNA amplification products from multiple specimens, followed by sequencing through the high-throughput Genome Sequencer 20 DNA Sequencing System (GS20, Roche/454 Life Sciences). Each DNA sequence is subsequently traced back to its individual source through 5'-tag analysis. We demonstrate that this new approach enables the assignment of virtually all the generated DNA sequences to the correct source once sequencing anomalies are accounted for (mis-assignment rate < 0.4%). Therefore, the method enables accurate sequencing and assignment of homologous DNA sequences from multiple sources in a single high-throughput GS20 run. We observe a bias in the distribution of the differently tagged primers that is dependent on the 5' nucleotide of the tag. In particular, primers 5'-labelled with a cytosine are heavily overrepresented among the final sequences, while those 5'-labelled with a thymine are strongly underrepresented. A weaker bias also exists with regard to the distribution of the sequences as sorted by the second nucleotide of the dinucleotide tags. As the results are based on a single GS20 run, the general applicability of the approach requires confirmation. However, our experiments demonstrate that 5'-primer tagging is a useful method by which the sequencing power of the GS20 can be applied to PCR-based assays of multiple homologous PCR products. The new approach will be of value to a broad range of research areas, such as comparative genomics, complete mitochondrial analyses, population genetics, and phylogenetics.
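    The core of the tagging scheme, assigning each read back to its source by its 5' tag, amounts to a prefix lookup. A minimal demultiplexing sketch with invented dinucleotide-style tags (a real design would also handle sequencing errors and primer trimming, which this omits):

```python
# Map each 5' tag to its source specimen (invented example tags).
TAGS = {"CA": "specimen_1", "CT": "specimen_2", "GA": "specimen_3"}
TAG_LEN = 2

def demultiplex(reads):
    """Assign reads to sources by exact 5'-tag match; unmatched reads
    are binned separately rather than guessed."""
    bins = {src: [] for src in TAGS.values()}
    bins["unassigned"] = []
    for read in reads:
        src = TAGS.get(read[:TAG_LEN], "unassigned")
        trimmed = read[TAG_LEN:] if src != "unassigned" else read
        bins[src].append(trimmed)
    return bins

reads = ["CAACGTGCTT", "CTGGATTCAG", "TTACGGATCA", "GACCTTAGGC"]
for src, seqs in demultiplex(reads).items():
    print(src, len(seqs))
```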
