Sample records for faster running time

  1. Physiological characteristics of elite short- and long-distance triathletes.

    PubMed

    Millet, Grégoire P; Dréano, Patrick; Bentley, David J

    2003-01-01

    The purpose of this study was to compare the physiological responses in cycling and running of elite short-distance (ShD) and long-distance (LD) triathletes. Fifteen elite male triathletes participating in the World Championships were divided into two groups (ShD and LD) and performed a laboratory trial that comprised submaximal treadmill running, maximal then submaximal cycle ergometry, and then an additional submaximal run. "In situ" best ShD triathlon performances were also analysed for each athlete. ShD demonstrated a significantly faster swim time than LD, whereas VO2max (ml·kg⁻¹·min⁻¹), cycling economy (W·l⁻¹·min⁻¹), peak power output (Wpeak, W) and ventilatory threshold (%VO2max) were all similar between ShD and LD. Moreover, there were no differences between the two groups in the change (%) in running economy from the first to the second running bout. Swimming time was correlated with Wpeak (r = -0.76; P < 0.05) and economy (r = -0.89; P < 0.01) in the ShD athletes. Also, cycling time in the triathlon was correlated with Wpeak (r = -0.83; P < 0.05) in LD. In conclusion, ShD triathletes had a faster swimming time but did not exhibit maximal or submaximal physiological characteristics, measured in cycling and running, different from those of LD triathletes.

  2. SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.

    PubMed

    Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile

    2015-01-01

    In recent years we have witnessed growth in sequencing yield, in the number of samples sequenced and, as a result, in the size of publicly maintained sequence databases. This flood of data places high demands on protein similarity search algorithms, with two ever-opposing goals: keeping running times acceptable while maintaining a high enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences, usually performed with exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, aligning a query against the whole database is usually too slow. Therefore, most protein similarity search methods apply heuristics before the exact local alignment to reduce the number of candidate sequences in the database. However, there is still a need to align a query sequence to this reduced database. In this paper we present the SW#db tool and library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase, in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++ and more than 20 times faster than SSW when running multiple queries against the Swiss-Prot and UniRef90 databases.
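
    The quadratic cost mentioned above is easy to see in code. Below is a minimal pure-Python sketch of Smith-Waterman local alignment; the scoring constants and the linear gap penalty are illustrative assumptions, not SW#db's defaults:

    ```python
    # Minimal Smith-Waterman local alignment with a linear gap penalty.
    # Scoring constants are illustrative assumptions, not SW#db's defaults.
    def smith_waterman(query, target, match=2, mismatch=-1, gap=-2):
        rows, cols = len(query) + 1, len(target) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):        # O(len(query) * len(target)) cells:
            for j in range(1, cols):    # this is the quadratic cost noted above
                diag = H[i - 1][j - 1] + (match if query[i - 1] == target[j - 1] else mismatch)
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best  # score of the best local alignment

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
    ```

    Every query symbol is scored against every database symbol, which is exactly why heuristic prefiltering, and GPU parallelization of this phase, pays off.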

  3. Sex difference in Double Iron ultra-triathlon performance

    PubMed Central

    2013-01-01

    Background: The present study examined the sex difference in swimming (7.8 km), cycling (360 km), running (84 km), and overall race times for Double Iron ultra-triathletes. Methods: Sex differences in split times and overall race times of 1,591 men and 155 women finishing a Double Iron ultra-triathlon between 1985 and 2012 were analyzed. Results: The annual number of finishes increased linearly for women and exponentially for men. Men achieved race times of 1,716 ± 243 min compared to 1,834 ± 261 min for women and were 118 ± 18 min (6.9%) faster (p < 0.01). Men finished swimming within 156 ± 63 min compared to women with 163 ± 31 min and were 8 ± 32 min (5.1 ± 5.0%) faster (p < 0.01). For cycling, men (852 ± 196 min) were 71 ± 70 min (8.3 ± 3.5%) faster than women (923 ± 126 min) (p < 0.01). Men completed the run split within 710 ± 145 min compared to 739 ± 150 min for women and were 30 ± 5 min (4.2 ± 3.4%) faster (p = 0.03). The annual three fastest men improved race time from 1,650 ± 114 min in 1985 to 1,339 ± 33 min in 2012 (p < 0.01). Overall race time for women remained unchanged at 1,593 ± 173 min with an unchanged sex difference of 27.1 ± 8.6%. In swimming, the split times for the annual three fastest women (148 ± 14 min) and men (127 ± 20 min) remained unchanged with an unchanged sex difference of 26.8 ± 13.5%. In cycling, the annual three fastest men improved the split time from 826 ± 60 min to 666 ± 18 min (p = 0.02). For women, the split time in cycling remained unchanged at 844 ± 54 min with an unchanged sex difference of 25.2 ± 7.3%. In running, the annual three fastest men improved split times from 649 ± 77 min to 532 ± 16 min (p < 0.01). For women, however, the split times remained unchanged at 657 ± 70 min with a stable sex difference of 32.4 ± 12.5%. Conclusions: The present findings showed that men were faster than women in Double Iron ultra-triathlon, that men improved overall race times and cycling and running split times, and that the sex difference remained unchanged across years for overall race time and split times. The sex differences for overall race times and split times were higher than reported for Ironman triathlon. PMID:23849631
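
    As a quick check of the headline result, the quoted 6.9% sex difference follows directly from the mean overall race times reported above; a one-off sketch in Python:

    ```python
    # Reproduce the quoted ~6.9% sex difference from the mean race times above.
    men, women = 1716, 1834             # mean overall race times (min)
    diff_min = women - men              # 118 min
    diff_pct = 100 * diff_min / men     # 6.88% -> reported as 6.9%
    print(diff_min, round(diff_pct, 1))
    ```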

  4. Is Single-Port Laparoscopy More Precise and Faster with the Robot?

    PubMed

    Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D

    2016-11-01

    Single-port laparoscopy is a step toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in a chopstick setting might overcome these problems. This study evaluated the outcome, in time and errors, of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks, peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings. The assessment scores (time and errors) were recorded. For the first task, peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly so (116 versus 157 seconds, P = .08). For the second, suturing task, only 3 participants in the SILS group were able to perform the task within the set time frame of 600 seconds. There was no significant difference in time across the three runs between SILS and RSILS for the 3 participants who fulfilled both tasks within the 600 seconds. This study shows that robotic single-port surgery seems to make basic tasks of the Fundamentals of Laparoscopic Surgery easier, faster, and more precise to perform. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill the task within the set time frame.

  5. The effects of running cadence manipulation on plantar loading in healthy runners.

    PubMed

    Wellenkotter, J; Kernozek, T W; Meardon, S; Suchomel, T

    2014-08-01

    Our purpose was to evaluate the effects of cadence manipulation on plantar loading during running. Participants (n=38) ran on a treadmill at their preferred speed under 3 cadence conditions (preferred, 5% increased, and 5% decreased) while plantar loading was measured using in-shoe sensors. Data (contact time [CT], peak force [PF], force-time integral [FTI], pressure-time integral [PTI] and peak pressure [PP]) were recorded for 30 right footfalls. Multivariate analysis was performed to detect differences in loading between cadences in the total foot and 4 plantar regions. Differences in plantar loading occurred between cadence conditions. Total foot CT and PF were lower with a faster cadence, but no total foot PP differences were observed. A faster cadence reduced CT, pressure and force variables in both the heel and metatarsal regions. Increasing cadence did not elevate metatarsal loads; rather, loads in the total foot and all regions were reduced when healthy runners increased their cadence. If a 5% increase in cadence from preferred were maintained over each mile run, the impulse at the heel would be reduced by an estimated 565 body weights·s (BW·s) and at the metatarsals by 140-170 BW·s per mile, despite the increased number of steps taken. Increasing cadence may benefit runners with overuse injuries associated with elevated plantar loading.
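
    To make the per-mile impulse bookkeeping concrete, the sketch below multiplies a per-step force-time integral by steps per mile. All numbers are hypothetical placeholders (the abstract does not report per-step values), back-solved only to land near the reported heel figure and to show why total load can fall even though a 5% faster cadence adds steps:

    ```python
    # Illustrative bookkeeping only: every value here is hypothetical, chosen to
    # show how total impulse can fall despite more steps per mile.
    cadence = 170.0                # steps/min at preferred cadence (hypothetical)
    pace = 9.0                     # min/mile (hypothetical)
    steps_per_mile = cadence * pace

    fti_preferred = 1.00           # heel force-time integral per step (BW*s), hypothetical
    fti_faster = 0.60              # reduced per-step heel load at +5% cadence, hypothetical

    impulse_pref = fti_preferred * steps_per_mile
    impulse_fast = fti_faster * steps_per_mile * 1.05   # 5% more steps per mile
    print(round(impulse_pref - impulse_fast, 1), "BW*s saved per mile")  # ~566, near the reported 565
    ```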

  6. Cultural Diversity in AP Art History

    ERIC Educational Resources Information Center

    Bolte, Frances R.

    2006-01-01

    Teaching AP Art History is like running on a treadmill that is moving faster than a teacher can run. Many teachers are out of breath before the end of the term and wonder how in the world they can cover every chapter. Because time is short and art from pre-history through to the present, including the non-European traditions, must be covered, this…

  7. Improved Algorithms Speed It Up for Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  8. The relationship between aerobic fitness and recovery from high-intensity exercise in infantry soldiers.

    PubMed

    Hoffman, J R

    1997-07-01

    The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 ± 2.4%) and groups 3 (2.6 ± 1.7%), 4 (2.3 ± 1.6%), and 5 (2.3 ± 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.
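
    The fatigue index defined above is a one-line computation. In the sketch below, expressing the mean-over-fastest ratio as a percentage above the fastest sprint is an assumption, inferred from the 2-5% values reported:

    ```python
    # Fatigue index: mean sprint time over fastest sprint time. Expressing it as
    # a percentage above the fastest is assumed from the values in the abstract.
    def fatigue_index(sprint_times):
        mean_t = sum(sprint_times) / len(sprint_times)
        return 100 * (mean_t / min(sprint_times) - 1)

    print(round(fatigue_index([24.1, 24.6, 25.0]), 1))  # hypothetical splits -> ~1.9%
    ```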

  9. An Upgrade of the Aeroheating Software ''MINIVER''

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD is often used to create and execute computational domains, with increasing complexity when moving from 2D to 3D geometries, and computational time increases as finer grids are used (for accuracy). It is a strong tool, but takes time to set up and run. MINIVER: uses theoretical and empirical correlations; orders of magnitude faster to set up and run; not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: rigid command-line interface; lackluster, unorganized documentation; no central control, as multiple versions exist and have diverged.

  10. Analyzing large scale genomic data on the cloud with Sparkhit

    PubMed Central

    Huang, Liren; Krüger, Jan

    2018-01-01

    Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge for large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads, but their scalability is not efficient, and they carry heavy run-time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, including the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation: Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact: asczyrba@cebitec.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29253074
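
    Sparkhit's own code is not shown here, but the Spark MapReduce style it is implemented in looks roughly like the following hypothetical PySpark fragment, which counts k-mers across distributed sequence records. The file name, record layout, and logic are illustrative assumptions, not Sparkhit's API:

    ```python
    # Hypothetical PySpark sketch of the MapReduce style Sparkhit builds on;
    # this is NOT Sparkhit's API, just the general Spark programming model.
    from pyspark import SparkContext

    sc = SparkContext(appName="kmer-count-sketch")
    reads = sc.textFile("reads.txt")   # one sequence per line (assumed layout)

    K = 8
    kmer_counts = (reads
        .flatMap(lambda s: [s[i:i + K] for i in range(len(s) - K + 1)])  # map: emit k-mers
        .map(lambda kmer: (kmer, 1))
        .reduceByKey(lambda a, b: a + b))   # reduce: sum counts per k-mer

    print(kmer_counts.take(5))
    sc.stop()
    ```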

  11. Pilot-in-the Loop CFD Method Development

    DTIC Science & Technology

    2016-10-20

    State University. All software supporting piloted simulations must run at real time speeds or faster. This requirement drives the number of...objects in the environment. In turn, this flowfield affects the local aerodynamics of the main rotor blade sections, affecting blade air loads, and...model, empirical models of ground effect and rotor / airframe interactions) are disabled when running in fully coupled mode, so as to not “double count

  12. Improved performance in NASTRAN (R)

    NASA Technical Reports Server (NTRS)

    Chan, Gordon C.

    1989-01-01

    Three areas of improvement were incorporated recently in the 1989 release of COSMIC/NASTRAN that make the analysis program run faster on large problems. Actual log files and timings from a few test samples run on IBM, CDC, VAX, and CRAY computers were compiled. The speed improvement is proportional to the problem size and the number of continuation cards. Vectorizing certain operations in BANDIT makes it run twice as fast on some large problems using structural elements with many node points. BANDIT is a built-in NASTRAN processor that optimizes the structural matrix bandwidth. The VAX matrix-packing routine BLDPK was modified so that it now packs a column of a matrix 3 to 9 times faster. The denser and bigger the matrix, the greater the speed improvement. This change makes a host of routines and modules that involve matrix operations run significantly faster, and saves disc space for dense matrices. A UNIX version, converted from 1988 COSMIC/NASTRAN, was tested successfully on a Silicon Graphics computer running the UNIX System V operating system with Berkeley 4.3 extensions. The utility modules INPUTT5 and OUTPUT5 were expanded to handle table data as well as matrices; both are general input/output modules that read and write FORTRAN files with or without format. More informative user messages are echoed from the PARAMR, PARAMD, and SCALAR modules to ensure proper data values and data types are handled. Two new utility modules, GINOFILE and DATABASE, were written for the 1989 release. Seven rigid elements were added to COSMIC/NASTRAN: CRROD, CRBAR, CRTRPLT, CRBE1, CRBE2, CRBE3, and CRSPLINE.
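
    The column-packing operation attributed to BLDPK can be sketched generically: store only the nonzeros of a column together with their row indices, so dense runs of zeros cost nothing. This is an illustration of the idea, not NASTRAN's actual storage format:

    ```python
    # Generic sketch of packing a matrix column into (row index, value) pairs,
    # the kind of operation BLDPK performs; not NASTRAN's actual format.
    def pack_column(column):
        """Pack one dense matrix column into (row index, value) pairs."""
        return [(i, v) for i, v in enumerate(column) if v != 0.0]

    col = [0.0, 3.5, 0.0, 0.0, -1.2, 0.0]
    print(pack_column(col))   # [(1, 3.5), (4, -1.2)] -- zeros are dropped
    ```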

  13. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    PubMed Central

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  14. Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.

    PubMed

    Viker, Tomas; Richardson, Matt X

    2013-01-01

    Research with cyclists suggests that placing the shoe cleat more posteriorly decreases the load on the lower limbs, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30-min simulated draft-legal triathlon cycling leg, followed by a maximal 5-km run, on two occasions: once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breathing frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200-m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5-km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as were contact time during km 1 and 5, and heart rate and running speed for km 5. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) with the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short-distance triathlon.

  15. Evolutionary pattern of improved 1-mile running performance.

    PubMed

    Foster, Carl; de Koning, Jos J; Thiel, Christian

    2014-07-01

    The official world records (WR) for the 1-mile run for men (3:43.13) and for women (4:12.58) have improved 12.2% and 32.3%, respectively, since the first WRs recognized by the International Association of Athletics Federations. Previous observations have suggested that the pacing pattern over successive laps is characteristically faster-slower-slowest-faster. However, modeling studies have suggested that an uneven energy-output distribution, particularly a high velocity at the end of the race, is essentially wasted kinetic energy that could have been used to finish sooner. Here the authors report that further analysis of the pacing pattern in 32 men's WR races reveals a progressive reduction in the within-lap variation of pace, suggesting that improving the WR in the 1-mile run is as much about how energetic resources are managed as about the capacity of the athletes performing the race. In the women's WR races, the pattern of lap times has changed little, probably secondary to a lack of depth in the women's fields. Contemporary WR performances have been achieved with a coefficient of variation of lap times on the order of 1.5-3.0%. Reasonable projection suggests that the WR is overdue to be improved and may require lap times with a coefficient of variation of ~1%.
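
    The coefficient of variation of lap times used above is simply the standard deviation of the splits relative to their mean; a minimal sketch with hypothetical 400-m splits:

    ```python
    # Coefficient of variation (CV) of lap times; the lap splits are hypothetical.
    from statistics import mean, pstdev

    laps = [55.6, 56.4, 57.1, 55.2]       # four 400-m splits (s), hypothetical
    cv = 100 * pstdev(laps) / mean(laps)  # ~1.3%, near the projected ~1% target
    print(round(cv, 2))
    ```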

  16. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  17. Processing speed in recurrent visual networks correlates with general intelligence.

    PubMed

    Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F

    2007-01-08

    Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.

  18. A comparison of Hispanic middle school students' performance, and perceived and actual physical exertion, on the traditional and treadmill one-mile runs.

    PubMed

    Latham, Daniel T; Hill, Grant M; Petray, Clayre K

    2013-04-01

    The purpose of this study was to assess whether a treadmill mile is an acceptable FitnessGram Test substitute for the traditional one-mile run for middle school boys and girls. Peak heart rate and perceived physical exertion of the participants were also measured to assess students' effort. 48 boys and 40 girls participated, with approximately 85% classified as Hispanic. Boys' mean time for the traditional one-mile run was statistically significantly faster, and their peak heart rate and perceived exertion statistically significantly higher, than for the treadmill mile. Girls' treadmill mile times were not statistically significantly different from the traditional one-mile run, and there were no statistically significant differences in girls' peak heart rate or perceived exertion. The results suggest that offering middle school students a choice between the traditional one-mile run and the treadmill one-mile format for the FitnessGram mile run may positively affect performance.

  19. Pilot-in-the-Loop CFD Method Development

    DTIC Science & Technology

    2017-02-01

    Penn State University. All software supporting piloted simulations must run at real time speeds or faster. This requirement drives the number of...dynamics of interacting blade tip vortices with a ground plane," American Helicopter Society 64th Annual Forum Proceedings, 2008. [2] Johnson, W

  20. Particle-gas dynamics in the protoplanetary nebula

    NASA Technical Reports Server (NTRS)

    Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.

    1991-01-01

    In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.
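
    The eight-fold vectorization gain reported here reflects a general effect that is easy to demonstrate in miniature. The sketch below (a generic illustration, not the nebula code) times an element-at-a-time loop against the equivalent vectorized array expression:

    ```python
    # Generic illustration of loop vs. vectorized speedups (not the nebula code).
    import time
    import numpy as np

    a = np.random.rand(1_000_000)

    t0 = time.perf_counter()
    slow = [3.0 * x + 0.5 for x in a]   # element-at-a-time Python loop
    t1 = time.perf_counter()
    fast = 3.0 * a + 0.5                # one vectorized array expression
    t2 = time.perf_counter()

    print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.4f}s")
    ```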

  1. How Biomechanical Improvements in Running Economy Could Break the 2-hour Marathon Barrier.

    PubMed

    Hoogkamer, Wouter; Kram, Rodger; Arellano, Christopher J

    2017-09-01

    A sub-2-hour marathon requires an average velocity (5.86 m/s) that is 2.5% faster than the current world record of 02:02:57 (5.72 m/s) and could be accomplished with a 2.7% reduction in the metabolic cost of running. Although supporting body weight comprises the majority of the metabolic cost of running, targeting the costs of forward propulsion and leg swing is the most promising strategy for reducing the metabolic cost of running and thus improving marathon running performance. Here, we calculate how much time could be saved by taking advantage of unconventional drafting strategies, a consistent tailwind, a downhill course, and specific running shoe design features while staying within the current International Association of Athletics Federations regulations for record purposes. Specifically, running in shoes that are 100 g lighter along with second-half scenarios of four runners alternately leading and drafting, or a tailwind of 6.0 m/s, combined with a 42-m elevation drop could result in a time well below the 2-hour marathon barrier.
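
    The velocity figures quoted above follow directly from the marathon distance and the stated times; a quick arithmetic check:

    ```python
    # Check the velocity arithmetic quoted above.
    marathon_m = 42195
    wr = 2 * 3600 + 2 * 60 + 57           # 2:02:57 -> 7377 s
    v_wr = marathon_m / wr                # 5.72 m/s
    v_sub2 = marathon_m / (2 * 3600)      # 5.86 m/s for a 2:00:00 finish
    print(round(v_wr, 2), round(v_sub2, 2), f"{100 * (v_sub2 / v_wr - 1):.1f}% faster")
    ```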

  2. Tactical Behaviors in Men's 800-m Olympic and World-Championship Medalists: A Changing of the Guard.

    PubMed

    Sandford, Gareth N; Pearson, Simon; Allen, Sian V; Malcata, Rita M; Kilding, Andrew E; Ross, Angus; Laursen, Paul B

    2018-02-01

    To assess the longitudinal evolution of tactical behaviors used to medal in men's 800-m Olympic Games (OG) or world-championship (WC) events in the recent competition era (2000-2016). Thirteen OG and WC events were characterized for 1st- and 2nd-lap splits using available footage from YouTube. Positive pacing strategies were defined as a faster 1st lap. Season's best 800-m time and world ranking, reflective of an athlete's "peak condition," were obtained to determine relationships between adopted tactics and physical condition prior to the championships. Seven championship events provided coverage of all medalists to enable determination of average 100-m speed and sector pacing of medalists. From 2011 onward, 800-m OG and WC medalists showed a faster 1st lap by 2.2 ± 1.1 s (mean, ±90% confidence limits; large difference, very likely), contrasting a possibly faster 2nd lap from 2000 to 2009 (0.5, ±0.4 s; moderate difference). A positive pacing strategy was related to a higher world ranking prior to the championships (r = .94, .84-.98; extremely large, most likely). After 2011, the fastest 100-m sector from 800-m OG and WC medalists was faster than before 2009 by 0.5, ±0.2 m/s (large difference, most likely). A secular change in tactical racing behavior appears evident in 800-m championships; since 2011, medalists have largely run faster 1st laps and have faster 100-m sector-speed requirements. This finding may be pertinent for training, tactical preparation, and talent identification of athletes preparing for 800-m running at OGs and WCs.

  3. Material advantage?

    NASA Astrophysics Data System (ADS)

    Haake, Steve

    2012-07-01

    Sprinters are running faster than ever before, but why are javelin throwers not throwing further and swimmers not swimming faster? Steve Haake explains the effects of technology and rule change on sporting performance.

  4. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General-Purpose Graphics Processing Unit (GPGPU) calculation of Tsunami run-up was achieved over a wide area (the whole of Toyama Bay in Japan) through faster computation techniques. Toyama Bay has active faults on the seabed, so a huge earthquake could well trigger earthquakes and tsunami waves; predicting the tsunami run-up area is therefore important for reducing the damage the disaster causes to residents. However, the simulation is a very hard task given the computational resources it demands. High-resolution calculation on the order of several metres is required for a tsunami run-up simulation, because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge area must be simulated. In the Toyama Bay case the area is 42 km × 15 km: when 5 m × 5 m computational cells are used, over 26,000,000 cells are generated. A normal desktop CPU computer took about 10 hours for this calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which would in turn help protect residents around the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA Tesla K20X cards, with InfiniBand connections between computer nodes via the MVAPICH library. As a result, the calculation ran 5.16 times faster on six GPUs than on one GPU, corresponding to 86% parallel efficiency relative to linear speedup.
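
    The 86% figure is the standard parallel-efficiency ratio, observed speedup divided by the number of GPUs:

    ```python
    # Parallel efficiency from the reported 6-GPU speedup.
    speedup, n_gpus = 5.16, 6
    efficiency = speedup / n_gpus      # 0.86 -> the reported 86%
    print(f"{100 * efficiency:.0f}%")
    ```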

  5. Predictors of race time in male Ironman triathletes: physical characteristics, training, or prerace experience?

    PubMed

    Knechtle, Beat; Wirth, Andrea; Rosemann, Thomas

    2010-10-01

    The aim of the present study was to assess whether physical characteristics, training, or prerace experience were related to performance in recreational male Ironman triathletes using bi- and multivariate analysis. 83 male recreational triathletes who volunteered to participate in the study (M age 41.5 yr., SD = 8.9) had a mean body height of 1.80 m (SD = 0.06), mean body mass of 77.3 kg (SD = 8.9), and mean Body Mass Index of 23.7 kg/m2 (SD = 2.1) at the 2009 IRONMAN SWITZERLAND competition. Speed in running during training, personal best marathon time, and personal best time in an Olympic distance triathlon were related to the Ironman race time. These three variables explained 64% of the variance in Ironman race time. Personal best marathon time was significantly and positively related to the run split time in the Ironman race. Faster running while training and both a fast personal best time in a marathon and in an Olympic distance triathlon were associated with a fast Ironman race time.
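
    A multivariate model of the kind described, several predictors jointly explaining variance in Ironman race time, can be sketched as an ordinary least-squares regression. The data below are synthetic placeholders (not the study's dataset), so the printed R² only plays the role of the 64% reported above:

    ```python
    # Sketch of the multivariate analysis described; data are synthetic, NOT the study's.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 83                                    # same sample size as the study
    train_speed = rng.normal(11.0, 1.5, n)    # training run speed (km/h), hypothetical
    pb_marathon = rng.normal(210.0, 25.0, n)  # personal best marathon (min), hypothetical
    pb_olympic = rng.normal(150.0, 20.0, n)   # personal best Olympic-distance (min), hypothetical
    X = np.column_stack([train_speed, pb_marathon, pb_olympic])

    # Synthetic outcome: faster training and faster personal bests -> faster Ironman time.
    y = 400.0 - 20.0 * train_speed + 1.5 * pb_marathon + 1.0 * pb_olympic \
        + rng.normal(0.0, 30.0, n)

    model = LinearRegression().fit(X, y)
    print(round(model.score(X, y), 2))        # R^2, playing the role of the 64%
    ```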

  6. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.

    PubMed

    Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W

    2014-04-01

    This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using a manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.

  7. Loading forces in shallow water running in two levels of immersion.

    PubMed

    Haupenthal, Alessandro; Ruschel, Caroline; Hubert, Marcel; de Brito Fontana, Heiliane; Roesler, Helio

    2010-07-01

    To analyse the vertical and anteroposterior components of the ground reaction force during shallow-water running at 2 levels of immersion. Twenty-two healthy adults with no gait disorders, who were familiar with aquatic exercises. Subjects performed 6 trials of water running at a self-selected speed at chest and hip immersion. Force data were collected through an underwater force plate, and running speed was measured with a photocell timing light system. Analysis of covariance was used for data analysis. Vertical forces corresponded to 0.80 and 0.98 times the subject's body weight at the chest and hip level, respectively. Anteroposterior forces corresponded to 0.26 and 0.31 times the subject's body weight at the chest and hip level, respectively. As the water level decreased, the subjects ran faster. No significant differences were found in the force values between the immersion levels, probably due to variability in speed, which was self-selected. When thinking about load values in water running, professionals should consider not only the immersion level but also the speed, as it can affect the force components, mainly the anteroposterior one. Quantitative data on this subject could help professionals to conduct safer aquatic rehabilitation and physical conditioning protocols.

  8. Genetically improved BarraCUDA.

    PubMed

    Langdon, W B; Lam, Brian Yee Hong

    2017-01-01

    BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.

  9. Semiannual Report, Contract Number NAS1-18605, April 1, thru September 30, 1991

    DTIC Science & Technology

    1991-11-01

    one and two dimensional problems are presented. It is shown experimentally that the synchronization penalty can be about 50% of run time: in most cases...have resident appointments for limited periods of time, and by consultants. Members of NASA's research staff also may be residents at ICASE for limited...very important factor in implementing nondestructive evaluation techniques. The latest version of our algorithm is at least four times faster than

  10. ORNL Cray X1 evaluation status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, P.K.; Alexander, R.A.; Apra, E.

    2004-05-01

    On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. "This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership," said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE application codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.

  11. Actual situation analyses of rat-run traffic on community streets based on car probe data

    NASA Astrophysics Data System (ADS)

    Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao

    2017-10-01

    Lowering so-called "rat-run" traffic on community streets has been one of the significant challenges in improving the living environment of neighborhoods. However, it has been difficult to quantitatively grasp the actual situation of rat-run traffic with traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic from car probe data. In addition, based on the rat-run traffic extracted for Toyohashi city, Japan, we analyze its actual situation, such as the time and location distribution of the rat-run traffic. The results show that, in Toyohashi city, the rate of rat-run route use increases in peak time periods. Focusing on the location distribution, rat-run traffic passes through a wide variety of community streets, with no great inter-district bias in the routes frequently used. Next, we focused on trips passing through one heavily used rat-run route. We found that drivers may use this route habitually, as their trips had several commonalities: they tend to choose the rat-run route because it is shorter than the alternative highway route, and their travel speeds were faster than on the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and phenomenal tendencies of rat-run traffic.

  12. Ultramarathon runners: nature or nurture?

    PubMed

    Knechtle, Beat

    2012-12-01

    Ultramarathon running is increasingly popular. An ultramarathon is defined as a running event involving distances longer than the length of a traditional marathon of 42.195 km. In ultramarathon races, ~80% of the finishers are men. Ultramarathoners are typically ~45 y old and achieve their fastest running times between 30 and 49 y for men, and between 30 and 54 y for women. Most probably, ultrarunners start with a marathon before competing in an ultramarathon. In ultramarathoners, the number of previously completed marathons is significantly higher than the number of completed marathons in marathoners. However, recreational marathoners have a faster personal-best marathon time than ultramarathoners. Successful ultramarathoners have 7.6 ± 6.3 y of experience in ultrarunning. Ultramarathoners complete more running kilometers in training than marathoners do, but they run more slowly during training than marathoners. To summarize, ultramarathoners are master runners, have a broad experience in running, and prepare differently for an ultramarathon than marathoners do. However, it is not known what motivates male ultramarathoners and where ultramarathoners mainly originate. Future studies need to investigate the motivation of male ultramarathoners, where the best ultramarathoners originate, and whether they prepare by competing in marathons before entering ultramarathons.

  13. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  14. A faster technique for rendering meshes in multiple display systems

    NASA Astrophysics Data System (ADS)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple-display systems used in VR. It improves the visual quality of the system through graphics hardware acceleration, and improves the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.

  15. Comparisons of population subgroups performance on a keyboard psychomotor task

    NASA Technical Reports Server (NTRS)

    Stapleford, R. L.

    1973-01-01

    Response time and pass/fail data were obtained from 163 subjects performing a psychomotor task. The basic task comprised a random five-digit number briefly displayed to the subject at the start of each trial, and a keyboard on which the subject was to enter the number as quickly as he could accurately do so after the display was extinguished. Some tests were run with the addition of a secondary task, which required the subject to respond to a displayed light appearing at a random time. Matched pairs of subjects were selected from the group to analyze the effects of age, sex, intelligence, prior keyboard skill, and drinking habits. There was little or no effect due to age or drinking habits. Differences in response time were: average-IQ subjects faster than low-IQ subjects by 0.5 to 0.6 sec; subjects with prior keyboard skill faster by 0.4 to 0.5 sec; and female subjects faster by 0.2 to 0.3 sec. These effects were generally insensitive to the presence of the secondary task.

  16. Faster and More Accurate Transport Procedures for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.

    2010-01-01

    Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  17. Half-marathoners are younger and slower than marathoners.

    PubMed

    Knechtle, Beat; Nikolaidis, Pantelis T; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A

    2016-01-01

    Age and performance trends of elite and recreational marathoners are well investigated, but not those of half-marathoners. We analysed age and performance trends in 508,108 age-group runners (125,894 female and 328,430 male half-marathoners and 10,205 female and 43,489 male marathoners) competing between 1999 and 2014 in all flat half-marathons and marathons held in Switzerland, using single linear regression analyses, mixed-effects regression analyses and analyses of variance. The number of women and men increased across years in both half-marathons and marathons. There were 12.3 times more female half-marathoners than female marathoners and 7.5 times more male half-marathoners than male marathoners. For both half-marathons and marathons, most of the female and male finishers were recorded in age group 40-44 years. In half-marathons, women (10.29 ± 3.03 km/h) were running 0.07 ± 0.06 km/h faster (p < 0.001) than men (10.22 ± 3.06 km/h). Also in marathons, women (14.77 ± 4.13 km/h) were running 0.28 ± 0.16 km/h faster (p < 0.001) than men (14.48 ± 4.07 km/h). In marathons, women (42.18 ± 10.63 years) were the same age as men (42.06 ± 10.45 years) (p > 0.05). Also in half-marathons, women (41.40 ± 10.63 years) were the same age as men (41.31 ± 10.30 years) (p > 0.05). However, women and men marathon runners were older than their half-marathon counterparts (p < 0.001). In summary, (1) more athletes competed in half-marathons than in marathons, (2) women were running faster than men, (3) half-marathoners were running slower than marathoners, and (4) half-marathoners were younger than marathoners.

  18. Behavioral assessment of intermittent wheel running and individual housing in mice in the laboratory.

    PubMed

    Pham, Therese M; Brené, Stefan; Baumans, Vera

    2005-01-01

    Physical cage enrichment for rodents in the laboratory often includes exercise devices such as running wheels. This study compared the responses of mice housed in enriched physical and social conditions and in standard social conditions to wheel running, individual housing, and an open-field test. Forty-eight female BALB/c mice, group-housed in enriched or standard conditions, were divided into 6 groups. On alternate days, 2 groups were exposed to individual running wheel cages; 2 groups were intermittently separated from their cage mates and housed individually with no running wheels; and 2 control groups remained in enriched or standard condition cages. There were no significant differences between enriched and standard group-housed mice in alternate days' wheel running. Over time, enriched, group-housed mice ran less. Both groups responded similarly to individual housing. In the open-field test, mice exposed to individual housing without a running wheel moved more and faster than wheel-running and home-cage control mice, and they had lower body weights than group-housed and wheel-running mice. Intermittent individual housing thus affected the animals more than the other conditions. Wheel running normalized some effects of intermittent separation from the enriched, social home cage.

  19. Using wheel availability to shape running behavior of the rat towards improved behavioral and neurobiological outcomes.

    PubMed

    Basso, Julia C; Morrell, Joan I

    2017-10-01

    Though voluntary wheel running (VWR) has been used extensively to induce changes in both behavior and biology, little attention has been given to the way in which different variables influence VWR. This lack of understanding has led to an inability to utilize this behavior to its full potential, possibly blunting its effects on the endpoints of interest. We tested how running experience, sex, gonadal hormones, and wheel apparatus influence VWR in a range of wheel access "doses". VWR increases over several weeks, with females eventually running 1.5 times farther and faster than males. Limiting wheel access can be used as a tool to motivate subjects to run but restricts maximal running speeds attained by the rodents. Additionally, circulating gonadal hormones regulate wheel running behavior, but are not the sole basis of sex differences in running. Limitations from previous studies include the predominate use of males, emphasis on distance run, variable amounts of wheel availability, variable light-dark cycles, and possible food and/or water deprivation. We designed a comprehensive set of experiments to address these inconsistencies, providing data regarding the "microfeatures" of running, including distance run, time spent running, running rate, bouting behavior, and daily running patterns. By systematically altering wheel access, VWR behavior can be finely tuned - a feature that we hypothesize is due to its positive incentive salience. We demonstrate how to maximize VWR, which will allow investigators to optimize exercise-induced changes in their behavioral and/or biological endpoints of interest.

  20. Performance and age of African and non-African runners in half- and full marathons held in Switzerland, 2000–2010

    PubMed Central

    Aschmann, André; Knechtle, Beat; Cribari, Marco; Rüst, Christoph Alexander; Onywera, Vincent; Rosemann, Thomas; Lepers, Romuald

    2013-01-01

    Background: Endurance running performance of African (AF) and non-African (NAF) athletes has been investigated, with better performances seen for Africans. To date, no study has compared the age of peak performance between AF and NAF runners. The present research analyses the age and running performance of top AF and NAF athletes, testing the hypothesis that AF athletes are younger and faster than NAF athletes. Methods: Age and performance of male and female AF and NAF athletes in half-marathons and marathons held in Switzerland in 2000–2010 were investigated using single and multilevel hierarchical regression analyses. Results: For half-marathons, male NAF runners were older than male AF runners (P = 0.02; NAF, 31.1 years ± 6.4 years versus AF, 26.2 years ± 4.9 years), and their running time was longer (P = 0.02; NAF, 65.3 minutes ± 1.7 minutes versus AF, 64.1 minutes ± 0.9 minutes). In marathons, differences between NAF and AF male runners in age (NAF, 33.0 years ± 4.8 years versus AF, 28.6 years ± 3.8 years; P < 0.01) and running time (NAF, 139.5 minutes ± 5.6 minutes versus AF, 133.3 minutes ± 2.7 minutes; P < 0.01) were more pronounced. There was no difference in age (NAF, 31.0 years ± 7.0 years versus AF, 26.7 years ± 6.0 years; P > 0.05) or running time (NAF, 75.0 minutes ± 3.7 minutes versus AF, 75.6 minutes ± 5.3 minutes; P > 0.05) between NAF and AF female half-marathoners. For marathoners, NAF women were older than AF female runners (P = 0.03; NAF, 31.6 years ± 4.8 years versus AF, 27.8 years ± 5.3 years), but their running times were similar (NAF, 162.4 minutes ± 7.2 minutes versus AF, 163.0 minutes ± 7.0 minutes; P > 0.05). Conclusion: In Switzerland, the best AF male half-marathoners and marathoners were younger and faster than their NAF counterparts. In contrast to the results seen in men, AF and NAF female runners had similar performances. Future studies need to investigate the performance and age of AF and NAF marathoners in the World Marathon Majors series. PMID:24379724

  1. Whole beetroot consumption acutely improves running performance.

    PubMed

    Murphy, Margaret; Eliot, Katie; Heuertz, Rita M; Weiss, Edward

    2012-04-01

    Nitrate ingestion improves exercise performance; however, it has also been linked to adverse health effects, except when consumed in the form of vegetables. The purpose of this study was to determine whether whole beetroot consumption, as a means of increasing nitrate intake, improves endurance exercise performance. Eleven recreationally fit men and women were studied in a double-blind, placebo-controlled crossover trial performed in 2010. Participants underwent two 5-km treadmill time trials in random sequence, once 75 minutes after consuming baked beetroot (200 g with ≥500 mg nitrate) and once 75 minutes after consuming cranberry relish as a eucaloric placebo. Based on paired t tests, mean running velocity during the 5-km run tended to be faster after beetroot consumption (12.3±2.7 vs 11.9±2.6 km/hour; P=0.06). During the last 1.1 miles (1.8 km) of the 5-km run, running velocity was 5% faster (12.7±3.0 vs 12.1±2.8 km/hour; P=0.02) in the beetroot trial, with no differences in velocity (P≥0.25) in the earlier portions of the run. No differences in exercise heart rate were observed between trials; however, at 1.8 km into the 5-km run, rating of perceived exertion was lower with beetroot (13.0±2.1 vs 13.7±1.9; P=0.04). Consumption of nitrate-rich whole beetroot improves running performance in healthy adults. Because whole vegetables have been shown to have health benefits, whereas nitrates from other sources may have detrimental health effects, it would be prudent for individuals seeking performance benefits to obtain nitrates from whole vegetables, such as beetroot.
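
    The comparison reported above is a standard paired t-test on the two trials of each participant; a minimal sketch with hypothetical paired velocities (n=11, matching the study's sample size):

    ```python
    # Paired t-test of the kind reported; the velocities here are hypothetical.
    from scipy import stats

    beetroot = [12.8, 11.9, 13.1, 12.5, 11.7, 12.9, 13.4, 12.2, 12.6, 13.0, 12.4]
    placebo  = [12.1, 11.6, 12.8, 12.0, 11.3, 12.5, 12.9, 11.8, 12.3, 12.6, 12.0]
    t, p = stats.ttest_rel(beetroot, placebo)   # paired (related-samples) t-test
    print(round(t, 2), round(p, 3))
    ```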

  2. Running with horizontal pulling forces: the benefits of towing.

    PubMed

    Grabowski, Alena M; Kram, Rodger

    2008-10-01

    Towing, or running with a horizontal pulling force, is a common technique used by adventure racing teams. During an adventure race, the slowest person on a team determines the team's overall performance. To improve overall performance, a faster runner tows a slower runner with an elastic cord attached to their waists. Our purpose was to create and validate a model that predicts the optimal towing force needed by two runners to achieve their best overall performance. We modeled the effects of towing forces between two runners that differ in solo 10-km performance time and/or body mass. We calculated the overall time that could be saved with towing for running distances of 10, 20, and 42.2 km based on equations from previous research. Then, we empirically tested our 10-km model on 15 runners. Towing improved overall running performance considerably and our model accurately predicted this performance improvement. For example, if two runners (a 70-kg runner with a 35-min solo 10-km time and a 70-kg runner with a 50-min solo 10-km time) maintain an optimal towing force throughout a 10-km race, they can improve overall performance by 15%, saving almost 8 min. Ultimately, the race performance time and body mass of each runner determine the optimal towing force.

  3. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
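
    The core random walk is easy to sketch. The toy below runs coordinate hit-and-run over a polytope {x : Ax <= b}; the rounding preprocessing step that gives CHRR its name is omitted, and the polytope must be bounded for the chord sampling to be well defined:

        # Coordinate hit-and-run over {x : A @ x <= b} (rounding step omitted).
        import numpy as np

        def coordinate_hit_and_run(A, b, x0, n_samples, seed=0):
            rng = np.random.default_rng(seed)
            x, samples = x0.astype(float).copy(), []
            for _ in range(n_samples):
                i = rng.integers(A.shape[1])       # random coordinate direction
                a, slack = A[:, i], b - A @ x      # slack >= 0 while x is feasible
                # x + t*e_i stays feasible while t*a <= slack, row by row
                with np.errstate(divide="ignore", invalid="ignore"):
                    t = slack / a
                t_hi = t[a > 0].min()              # tightest upper bound
                t_lo = t[a < 0].max()              # tightest lower bound
                x[i] += rng.uniform(t_lo, t_hi)    # uniform point on the chord
                samples.append(x.copy())
            return np.array(samples)

        # Unit square: x <= 1 and -x <= 0 in each coordinate
        A = np.vstack([np.eye(2), -np.eye(2)])
        b = np.array([1.0, 1.0, 0.0, 0.0])
        pts = coordinate_hit_and_run(A, b, np.array([0.5, 0.5]), 1000)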

  4. Pressure Fluctuation Characteristics of Narrow Gauge Train Running Through Tunnel

    NASA Astrophysics Data System (ADS)

    Suzuki, Masahiro; Sakuma, Yutaka

    Pressure fluctuations on the sides of narrow (1067 mm) gauge trains running in tunnels are measured for the first time to investigate the aerodynamic force acting on the trains. The present measurements are compared with earlier measurements obtained with the Shinkansen trains. The results are as follows: (1) The aerodynamic force, which stems from pressure fluctuations on the sides of cars, puts energy into the vibration of the car body running through a tunnel. (2) While the pressure fluctuations appear only on one of the two sides of the trains running in double-track tunnels, they appear in opposite phase on both sides of the trains in single-track tunnels. (3) The on-track test data of the narrow gauge trains show the same tendency as those of the Shinkansen trains, although it is suggested that the pressure fluctuations develop faster along the narrow gauge trains than along the Shinkansen trains.

  5. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  6. Evaluation of the Xeon phi processor as a technology for the acceleration of real-time control in high-order adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine

    2014-08-01

    We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute-intensive numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, potentially making development, upgrades, and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
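
    The benchmarked kernel is just a dense matrix-vector multiply. A plain sketch of the timing loop, with made-up system dimensions rather than those of any particular AO instrument:

        # Time the reconstructor MVM: actuator commands = R @ slopes.
        import time
        import numpy as np

        n_slopes, n_act = 9216, 5316            # hypothetical high-order system
        R = np.random.rand(n_act, n_slopes).astype(np.float32)
        s = np.random.rand(n_slopes).astype(np.float32)

        t0 = time.perf_counter()
        for _ in range(100):
            commands = R @ s                    # one MVM per control-loop cycle
        dt = (time.perf_counter() - t0) / 100
        print(f"mean MVM: {dt*1e3:.2f} ms -> {1/dt:.0f} Hz loop rate")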

  7. Simulation of linear mechanical systems

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.

    1993-01-01

    A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.

  8. A faster 1.375-approximation algorithm for sorting by transpositions.

    PubMed

    Cunha, Luís Felipe I; Kowada, Luis Antonio B; Hausen, Rodrigo de A; de Figueiredo, Celina M H

    2015-11-01

    Sorting by Transpositions is an NP-hard problem for which several polynomial-time approximation algorithms have been developed. Hartman and Shamir (2006) developed a 1.5-approximation [Formula: see text] algorithm, whose running time was improved to O(n log n) by Feng and Zhu (2007) with a data structure they defined, the permutation tree. Elias and Hartman (2006) developed a 1.375-approximation O(n²) algorithm, and Firoz et al. (2011) claimed an improvement to the running time, from O(n²) to O(n log n), by using the permutation tree. We provide counter-examples to the correctness of Firoz et al.'s strategy, showing that it is not possible to reach a component by sufficient extensions using the method proposed by them. In addition, we propose a 1.375-approximation algorithm, modifying Elias and Hartman's approach with the use of permutation trees and achieving O(n log n) time.

  9. Real-time simulation of an automotive gas turbine using the hybrid computer

    NASA Technical Reports Server (NTRS)

    Costakis, W.; Merrill, W. C.

    1984-01-01

    A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with four-speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated on the digital portion. The simulation runs faster than real time, which makes it a useful tool for a variety of analytical studies.

  10. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  11. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce Celeris, an interactive coastal wave simulation and visualization software package. Celeris is open-source software that needs minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  12. Are running speeds maximized with simple-spring stance mechanics?

    PubMed

    Clark, Kenneth P; Weyand, Peter G

    2014-09-15

    Are the fastest running speeds achieved using the simple-spring stance mechanics predicted by the classic spring-mass model? We hypothesized that a passive, linear-spring model would not account for the running mechanics that maximize ground force application and speed. We tested this hypothesis by comparing patterns of ground force application across athletic specialization (competitive sprinters vs. athlete nonsprinters, n = 7 each) and running speed (top speeds vs. slower ones). Vertical ground reaction forces at 5.0 and 7.0 m/s, and individual top speeds (n = 797 total footfalls) were acquired while subjects ran on a custom, high-speed force treadmill. The goodness of fit between measured vertical force vs. time waveform patterns and the patterns predicted by the spring-mass model was assessed using the R² statistic (where an R² of 1.00 = perfect fit). As hypothesized, the force application patterns of the competitive sprinters deviated significantly more from the simple-spring pattern than those of the athlete nonsprinters across the three test speeds (R² < 0.85 vs. R² ≥ 0.91, respectively), and deviated most at top speed (R² = 0.78 ± 0.02). Sprinters attained faster top speeds than nonsprinters (10.4 ± 0.3 vs. 8.7 ± 0.3 m/s) by applying greater vertical forces during the first half (2.65 ± 0.05 vs. 2.21 ± 0.05 body wt), but not the second half (1.71 ± 0.04 vs. 1.73 ± 0.04 body wt) of the stance phase. We conclude that a passive, simple-spring model has limited application to sprint running performance because the swiftest runners use an asymmetrical pattern of force application to maximize ground reaction forces and attain faster speeds. Copyright © 2014 the American Physiological Society.
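
    The goodness-of-fit test reduces to comparing a measured force waveform against the model's half-sine prediction. A sketch with synthetic waveforms (not the study's force-plate data):

        # R^2 between a "measured" vertical GRF waveform and the half-sine
        # predicted by the spring-mass model; synthetic stand-in data.
        import numpy as np

        tc = 0.11                                   # contact time, s
        t = np.linspace(0.0, tc, 100)
        predicted = 2000 * np.sin(np.pi * t / tc)   # simple-spring half-sine
        measured = predicted + 150 * np.sin(2 * np.pi * t / tc)  # sprinter-like skew

        ss_res = np.sum((measured - predicted) ** 2)
        ss_tot = np.sum((measured - measured.mean()) ** 2)
        print(f"R^2 = {1 - ss_res / ss_tot:.3f}")   # 1.00 = perfect fit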

  13. Sex difference in top performers from Ironman to double deca iron ultra-triathlon

    PubMed Central

    Knechtle, Beat; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A

    2014-01-01

    This study investigated changes in performance and sex difference in top performers for ultra-triathlon races held between 1978 and 2013 from Ironman (3.8 km swim, 180 km cycle, and 42 km run) to double deca iron ultra-triathlon distance (76 km swim, 3,600 km cycle, and 844 km run). The fastest men ever were faster than the fastest women ever for split and overall race times, with the exception of the swimming split in the quintuple iron ultra-triathlon (19 km swim, 900 km cycle, and 210.1 km run). Correlation analyses showed an increase in sex difference with increasing length of race distance for swimming (r² = 0.67, P = 0.023), running (r² = 0.77, P = 0.009), and overall race time (r² = 0.77, P = 0.0087), but not for cycling (r² = 0.26, P = 0.23). For the annual top performers, split and overall race times decreased across years nonlinearly in female and male Ironman triathletes. For longer distances, cycling split times decreased linearly in male triple iron ultra-triathletes, and running split times decreased linearly in male double iron ultra-triathletes but increased linearly in female triple and quintuple iron ultra-triathletes. Overall race times increased nonlinearly in female triple and male quintuple iron ultra-triathletes. The sex difference decreased nonlinearly in swimming, running, and overall race time in Ironman triathletes but increased linearly in cycling and running and nonlinearly in overall race time in triple iron ultra-triathletes. These findings suggest that women reduced the sex difference nonlinearly in shorter ultra-triathlon distances (ie, Ironman), but for longer distances than the Ironman, the sex difference increased or remained unchanged across years. It seems very unlikely that female top performers will ever outrun male top performers in ultra-triathlons. The nonlinear change in speed and sex difference in Ironman triathlon suggests that female and male Ironman triathletes have reached their limits in performance. PMID:25114605

  14. Cumulative loads increase at the knee joint with slow-speed running compared to faster running: a biomechanical study.

    PubMed

    Petersen, Jesper; Sørensen, Henrik; Nielsen, Rasmus Østergaard

    2015-04-01

    Biomechanical cross-sectional study. To investigate the hypothesis that the cumulative load at the knee during running increases as running speed decreases. The knee joint load per stride decreases as running speed decreases. However, by decreasing running speed, the number of strides per given distance is increased. Running a given distance at a slower speed may increase the cumulative load at the knee joint compared with running the same distance at a higher speed, hence increasing the risk of running-related injuries in the knee. Kinematic and ground reaction force data were collected from 16 recreational runners, during steady-state running with a rearfoot strike pattern at 3 different speeds (mean ± SD): 8.02 ± 0.17 km/h, 11.79 ± 0.21 km/h, and 15.78 ± 0.22 km/h. The cumulative load (cumulative impulse) over a 1000-m distance was calculated at the knee joint on the basis of a standard 3-D inverse-dynamics approach. Based on a 1000-m running distance, the cumulative load at the knee was significantly higher at a slow running speed than at a high running speed (relative difference, 80%). The mean load per stride at the knee increased significantly across all biomechanical parameters, except impulse, following an increase in running speed. Slow-speed running decreases knee joint loads per stride and increases the cumulative load at the knee joint for a given running distance compared to faster running. The primary reason for the increase in cumulative load at slower speeds is an increase in number of strides needed to cover the same distance.
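
    The arithmetic behind the finding is simple: the total over a fixed distance is the per-stride impulse times the number of strides needed to cover it. A sketch with invented numbers, not the study's measurements:

        # Cumulative knee load over a fixed distance.
        def cumulative_load(distance_m, stride_length_m, impulse_per_stride):
            strides = distance_m / stride_length_m
            return strides * impulse_per_stride

        # Slower running: shorter strides, only modestly lower per-stride impulse,
        # so the 1000-m total comes out higher despite gentler individual strides.
        slow = cumulative_load(1000, 1.6, 35.0)     # hypothetical values
        fast = cumulative_load(1000, 2.6, 45.0)
        print(f"slow/fast cumulative load ratio: {slow / fast:.2f}")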

  15. Modelling Agent-Environment Interaction in Multi-Agent Simulations with Affordances

    DTIC Science & Technology

    2010-04-01

    allow operations analysts to conduct statistical studies comparing the effectiveness of different systems or tactics in different scenarios. Instead of... in a Monte-Carlo batch mode, producing statistical outcomes for particular measures of effectiveness. They typically also run at many times faster... Combined with annotated signs, the affordances allowed the traveller agents to find their way around the virtual airport and to conduct their business

  16. Leveraging FPGAs for Accelerating Short Read Alignment.

    PubMed

    Arram, James; Kaplan, Thomas; Luk, Wayne; Jiang, Peiyong

    2017-01-01

    One of the key challenges facing genomics today is how to efficiently analyze the massive amounts of data produced by next-generation sequencing platforms. With general-purpose computing systems struggling to address this challenge, specialized processors such as the Field-Programmable Gate Array (FPGA) are receiving growing interest. The means by which to leverage this technology for accelerating genomic data analysis is however largely unexplored. In this paper, we present a runtime reconfigurable architecture for accelerating short read alignment using FPGAs. This architecture exploits the reconfigurability of FPGAs to allow the development of fast yet flexible alignment designs. We apply this architecture to develop an alignment design which supports exact and approximate alignment with up to two mismatches. Our design is based on the FM-index, with optimizations to improve the alignment performance. In particular, the n-step FM-index, index oversampling, a seed-and-compare stage, and bi-directional backtracking are included. Our design is implemented and evaluated on a 1U Maxeler MPC-X2000 dataflow node with eight Altera Stratix-V FPGAs. Measurements show that our design is 28 times faster than Bowtie2 running with 16 threads on dual Intel Xeon E5-2640 CPUs, and nine times faster than Soap3-dp running on an NVIDIA Tesla C2070 GPU.
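
    For orientation, the search the FPGA design accelerates is FM-index backward search. A toy scalar version follows (naive BWT construction and rank queries; the paper's n-step index, oversampling, and backtracking are omitted):

        # Exact-match counting with FM-index backward search (toy scale only).
        def bwt(text):
            text += "$"
            rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
            return "".join(row[-1] for row in rotations)

        def fm_count(bwt_str, pattern):
            C, total = {}, 0                         # C[c]: # of chars < c in text
            for c in sorted(set(bwt_str)):
                C[c], total = total, total + bwt_str.count(c)
            occ = lambda c, i: bwt_str[:i].count(c)  # naive rank query
            lo, hi = 0, len(bwt_str)
            for c in reversed(pattern):              # extend pattern right-to-left
                if c not in C:
                    return 0
                lo, hi = C[c] + occ(c, lo), C[c] + occ(c, hi)
                if lo >= hi:
                    return 0
            return hi - lo

        print(fm_count(bwt("GATTACAGATTA"), "GATTA"))   # -> 2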

  17. The FASTER Approach: A New Tool for Calculating Real-Time Tsunami Flood Hazards

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Cross, A.; Johnson, L.; Miller, K.; Nicolini, T.; Whitmore, P.

    2014-12-01

    In the aftermath of the 2010 Chile and 2011 Japan tsunamis that struck the California coastline, emergency managers requested that the state tsunami program provide more detailed information about the flood potential of distant-source tsunamis well ahead of their arrival time. The main issue is that existing tsunami evacuation plans call for evacuation of the predetermined "worst-case" tsunami evacuation zone (typically at a 30- to 50-foot elevation) during any "Warning" level event; the alternative is to not call an evacuation at all. A solution to provide more detailed information for secondary evacuation zones has been the development of tsunami evacuation "playbooks" to plan for tsunami scenarios of various sizes and source locations. To determine a recommended level of evacuation during a distant-source tsunami, an analytical tool has been developed called the "FASTER" approach, an acronym for factors that influence the tsunami flood hazard for a community: Forecast Amplitude, Storm, Tides, Error in forecast, and the Run-up potential. Within the first couple hours after a tsunami is generated, the National Tsunami Warning Center provides tsunami forecast amplitudes and arrival times for approximately 60 coastal locations in California. At the same time, the regional NOAA Weather Forecast Offices in the state calculate the forecasted coastal storm and tidal conditions that will influence tsunami flooding. Providing added conservatism in calculating tsunami flood potential, we include an error factor of 30% for the forecast amplitude, which is based on observed forecast errors during recent events, and a site specific run-up factor which is calculated from the existing state tsunami modeling database. The factors are added together into a cumulative FASTER flood potential value for the first five hours of tsunami activity and used to select the appropriate tsunami phase evacuation "playbook" which is provided to each coastal community shortly after the forecast is provided.
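
    As described, the components are summed into a single flood-potential value. A sketch of that recipe; how the error and run-up terms scale with the forecast amplitude is an assumption here, and the inputs are invented:

        # FASTER flood potential: Forecast Amplitude + Storm + Tides +
        # Error in forecast + Run-up potential (all in metres here).
        def faster_flood_potential(forecast_amp, storm, tide, runup_factor):
            error = 0.30 * forecast_amp           # 30% forecast-error allowance
            runup = runup_factor * forecast_amp   # site-specific run-up (assumed form)
            return forecast_amp + storm + tide + error + runup

        print(f"{faster_flood_potential(1.2, 0.3, 0.9, 0.5):.2f} m")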

  18. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices

    PubMed Central

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.

    2018-01-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence are still lacking the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering a substantial performance gain for matrix multiplication on mobile devices; consequently, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) with GPU support. PMID:29629431

  19. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices.

    PubMed

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B

    2017-06-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence are still lacking the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering a substantial performance gain for matrix multiplication on mobile devices; consequently, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) with GPU support.

  20. The effect of experience on the hunting success of newly emerged spiderlings.

    PubMed

    Morse

    2000-12-01

    Initial interactions with prey may affect a predator's subsequent foraging success. With experience, second-instar Misumena vatia spiderlings (Thomisidae) that had recently emerged from their egg sacs oriented faster to fruit flies (Drosophila melanogaster) than naïve individuals. Orientation time of these spiderlings decreased rapidly for the first two to three runs (every third day) in a simple laboratory setting, and then remained low and relatively constant. Time to capture a fly also declined initially, but subsequently became extremely variable, increasing prior to moult. Increase in capture time and the failure to capture prey appeared associated with impending moult, rather than satiation. Spiderlings oriented to prey more rapidly at the beginning of the third instar than at the start of the second instar, suggesting that experience still enhanced performance after a moult cycle. Overall capture times at the beginning of the third instar decreased from those at the end of the second instar, but did not differ significantly from the beginning of the second instar, although spiderlings gaining the most biomass had the shortest mean capture times. In a second experiment, time to orient and time to capture prey did not differ in naïve, second-instar siblings run 1 and 3 days after emergence from their egg sacs. However, 3-day individuals that had captured prey each day (confiscated before they could feed) oriented faster than naïve 3-day-old siblings, but did not differ in the time taken to capture prey. Experience, rather than age or energetic condition, best explains these changes in performance. Copyright 2000 The Association for the Study of Animal Behaviour.

  1. Implementation of a multi-threaded framework for large-scale scientific applications

    DOE PAGES

    Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...

    2015-05-22

    The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint than before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance of our multi-threaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.

  2. The Impact of a Food Elimination Diet on Collegiate Athletes' 300-meter Run Time and Concentration

    PubMed Central

    Breshears, Karen; Baker, David McA.

    2014-01-01

    Background: Optimal human function and performance through diet strategies are critical for everyone but especially for those involved in collegiate or professional athletics. Currently, individualized medicine (IM) is emerging as a more efficacious approach to health with emphasis on personalized diet strategies for the public and is common practice for elite athletes. One method for directing patient-specific foods in the diet, while concomitantly impacting physical performance, may be via IgG food sensitivity and Candida albicans analysis from dried blood spot (DBS) collections. Methods: The authors designed a quasi-experimental, nonrandomized, pilot study without a control group. Twenty-three participants, 15 female, 8 male, from soccer/volleyball and football athletic teams, respectively, mean age 19.64±0.86 years, were recruited for the study, which examined pre- and post-test 300-meter run times and questionnaire responses after a 14-day IgG DBS–directed food elimination diet based on IgG reactivity to 93 foods. DBS specimen collection, 300-meter run times, and Learning Difficulties Assessment (LDA) questionnaires were collected at the participants' university athletics building on campus. IgG, C albicans, and S cerevisiae analyses were conducted at the Great Plains Laboratory, Lenexa, Kansas. Results: Data indicated a change in 300-meter run time, but not one of statistical significance (run time baseline mean=50.41 sec, run time intervention mean=50.14 sec). Descriptive statistics for frequency of responses and chi-square analysis revealed that 4 of the 23 items selected from the LDA (Listening-Memory and Concentration subscale R=.8669; Listening-Information Processing subscale R=.8517; and General Concentration and Memory subscale R=.9019) were improved posttest. Conclusion: The study results did not indicate merit in eliminating foods based on IgG reactivity for affecting athletic performance (faster 300-meter run time) but did reveal potential for affecting academic qualities of listening, information processing, concentration, and memory. Further studies are warranted evaluating IgG-directed food elimination diets for improving run time, concentration, and memory among college athletes as well as among other populations. PMID:25568830

  3. Prediction of toxic metals concentration using artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Gholami, R.; Kamkar-Rouhani, A.; Doulati Ardejani, F.; Maleki, Sh.

    2011-12-01

    Groundwater and soil pollution are noted to be the worst environmental problems related to the mining industry, because pyrite oxidation generates acid mine drainage and the consequent release and transport of toxic metals. The aim of this paper is to predict the concentration of Ni and Fe using a robust algorithm named support vector machine (SVM). Comparison of the obtained results of SVM with those of the back-propagation neural network (BPNN) indicates that the SVM can be regarded as a proper algorithm for the prediction of toxic metals concentration due to its relatively high correlation coefficient and short running time. As a matter of fact, the SVM method provided a better prediction of the toxic metals Fe and Ni and ran faster than the BPNN.
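
    A sketch of this kind of comparison with scikit-learn, on synthetic features standing in for the hydrochemical inputs (the paper's actual feature set and data are not reproduced here):

        # Support vector regression vs. a small BPNN on synthetic data.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((200, 4))                  # e.g. pH, sulfate, EC, depth
        y = 2.0 * X[:, 1] - 1.5 * X[:, 0] + 0.1 * rng.standard_normal(200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        svm = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
        bpnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                            random_state=0).fit(X_tr, y_tr)
        print("SVR  R^2:", round(svm.score(X_te, y_te), 3))
        print("BPNN R^2:", round(bpnn.score(X_te, y_te), 3))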

  4. Hiding the Disk and Network Latency of Out-of-Core Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David

    2001-01-01

    This paper describes an algorithm that improves the performance of application-controlled demand paging for out-of-core visualization by hiding the latency of reading data from both local disks or disks on remote servers. The performance improvements come from better overlapping the computation with the page reading process, and by performing multiple page reads in parallel. The paper includes measurements that show that the new multithreaded paging algorithm decreases the time needed to compute visualizations by one third when using one processor and reading data from local disk. The time needed when using one processor and reading data from remote disk decreased by two thirds. Visualization runs using data from remote disk actually ran faster than ones using data from local disk because the remote runs were able to make use of the remote server's high performance disk array.
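
    The overlap idea can be sketched with a thread pool that keeps several page reads in flight while earlier pages are processed; the paths and the process() step below are placeholders, not the paper's implementation:

        # Read-ahead: issue page reads concurrently, consume them in order.
        from concurrent.futures import ThreadPoolExecutor

        def load_page(path):
            with open(path, "rb") as fh:
                return fh.read()

        def process(page_bytes):
            pass                                  # stand-in for visualization work

        def visualize(page_paths, readers=4):
            with ThreadPoolExecutor(max_workers=readers) as pool:
                futures = [pool.submit(load_page, p) for p in page_paths]
                for fut in futures:               # reads overlap this loop's work
                    process(fut.result())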

  5. Similarities and differences in anthropometry and training between recreational male 100-km ultra-marathoners and marathoners.

    PubMed

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas

    2012-01-01

    Several recent investigations showed that the best marathon time of an individual athlete is also a strong predictor variable for the race time in a 100-km ultra-marathon. We investigated similarities and differences in anthropometry and training characteristics between 166 100-km ultra-marathoners and 126 marathoners in recreational male athletes. The association of anthropometric variables and training characteristics with race time was assessed by using bi- and multi-variate analysis. Regarding anthropometry, the marathoners had a significantly lower calf circumference (P < 0.05) and a significantly thicker skinfold at pectoral (P < 0.01), axilla (P < 0.05), and suprailiacal sites (P < 0.05) compared to the ultra-marathoners. Considering training characteristics, the marathoners completed significantly fewer hours (P < 0.001) and significantly fewer kilometres (P < 0.001) during the week, but they were running significantly faster during training (P < 0.001). The multi-variate analysis showed that age (P < 0.0001), body mass (P = 0.011), and percent body fat (P = 0.019) were positively and weekly running kilometres (P < 0.0001) were negatively related to 100-km race times in the ultra-marathoners. In the marathoners, percent body fat (P = 0.002) was positively and speed in running training (P < 0.0001) was negatively associated with marathon race times. In conclusion, these data suggest that performance in both marathoners and 100-km ultra-marathoners is inversely related to body fat. Moreover, marathoners rely more on speed in running during training whereas ultra-marathoners rely on volume in running training.

  6. NEQAIR96,Nonequilibrium and Equilibrium Radiative Transport and Spectra Program: User's Manual

    NASA Technical Reports Server (NTRS)

    Whiting, Ellis E.; Park, Chul; Liu, Yen; Arnold, James O.; Paterson, John A.

    1996-01-01

    This document is the User's Manual for a new version of the NEQAIR computer program, NEQAIR96. The program is a line-by-line and a line-of-sight code. It calculates the emission and absorption spectra for atomic and diatomic molecules and the transport of radiation through a nonuniform gas mixture to a surface. The program has been rewritten to make it easy to use, run faster, and include many run-time options that tailor a calculation to the user's requirements. The accuracy and capability have also been improved by including the rotational Hamiltonian matrix formalism for calculating rotational energy levels and Hoenl-London factors for dipole and spin-allowed singlet, doublet, triplet, and quartet transitions. Three sample cases are also included to help the user become familiar with the steps taken to produce a spectrum. A new user interface is included that uses check locations to select run-time options and to enter selected run data, making NEQAIR96 easier to use than the older versions of the code. Its ease of use and the speed of its algorithms make NEQAIR96 a valuable educational code as well as a practical spectroscopic prediction and diagnostic code.

  7. Technology Insertion Engineering Services Masking Process Evaluation Task Order No. 7. (Phase 1). Revision B

    DTIC Science & Technology

    1989-10-06

    spent pumice cleaning. All parts can be pumice cleaned faster by using the method described in Quick Fix Plan paragraph 6.0. Soaking the scrubbed masked... times were run at an unusually fast pace. For two other days workers were observed masking parts and by excluding the time spent talking and working on... stop-off lacquer, MICCROSTOP REDUCER is recommended. Also, a short soak in caustic cleaner, both at 212 °F, will break the adhesion and the coating is

  8. Qubits and quantum Hamiltonian computing performances for operating a digital Boolean 1/2-adder

    NASA Astrophysics Data System (ADS)

    Dridi, Ghassen; Faizy Namarvar, Omid; Joachim, Christian

    2018-04-01

    Quantum Boolean (1 + 1) digits 1/2-adders are designed with 3 qubits for the quantum computing (Qubits) and 4 quantum states for the quantum Hamiltonian computing (QHC) approaches. Detailed analytical solutions are provided to analyse the time operation of those different 1/2-adder gates. QHC is more robust to noise than Qubits and requires about the same amount of energy for running its 1/2-adder logical operations. QHC is faster in time than Qubits but its logical output measurement takes longer.
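
    For reference, the classical truth table that both quantum designs implement (a plain Boolean sketch, nothing quantum about it):

        # Boolean 1/2-adder: sum = a XOR b, carry = a AND b.
        def half_adder(a, b):
            return a ^ b, a & b

        for a in (0, 1):
            for b in (0, 1):
                s, c = half_adder(a, b)
                print(f"{a} + {b} -> sum={s}, carry={c}")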

  9. Shark: SQL and Rich Analytics at Scale

    DTIC Science & Technology

    2012-11-26

    learning programs up to 100× faster than Hadoop. Unlike previous systems, Shark shows that it is possible to achieve these speedups while retaining a... Shark to run SQL queries up to 100× faster than Apache Hive, and machine learning programs up to 100× faster than Hadoop. Unlike previous systems, Shark... so using a runtime that is optimized for such workloads and a programming model that is designed to express machine learning algorithms.

  10. Efficient Delaunay Tessellation through K-D Tree Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
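
    One level of the decomposition can be sketched as a median split along the widest coordinate, which hands each side an equal share of the points (this shows only the median strategy, one of the split-point choices such work compares):

        # Balanced k-d split of a point set along its widest axis.
        import numpy as np

        def kd_split(points):
            axis = np.argmax(points.max(axis=0) - points.min(axis=0))
            order = np.argsort(points[:, axis])
            mid = len(points) // 2
            return axis, points[order[:mid]], points[order[mid:]]

        pts = np.random.default_rng(1).random((10000, 3))  # stand-in particle data
        axis, left, right = kd_split(pts)
        print(f"split on axis {axis}: {len(left)} vs {len(right)} points")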

  11. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    PubMed Central

    2011-01-01

    Background The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance. PMID:21631914
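
    For reference, the scalar recurrence that SWIPE evaluates for sixteen database sequences at once; a toy scoring scheme stands in for the BLOSUM matrices used in practice:

        # Scalar Smith-Waterman local alignment score (linear gap penalty).
        def smith_waterman(query, target, match=2, mismatch=-1, gap=-2):
            rows, cols = len(query) + 1, len(target) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i-1][j-1] + (match if query[i-1] == target[j-1] else mismatch)
                    H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))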

  12. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    PubMed

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  13. Self-Reported Measures Of Strength And Sport-Specific Skills Distinguish Ranking In An International Online Fitness Competition.

    PubMed

    Serafini, Paul R; Feito, Yuri; Mangine, Gerald T

    2017-02-08

    To determine if self-reported performance measures could distinguish ranking during the 2016 CrossFit Open, data from three thousand male (n=1500; 27.2±8.4 y; 85.2±7.9 kg; 177.0±6.5 cm) and female (n=1500, 28.7±4.9 y; 63.7±5.8 kg; 163.7±6.6 cm) competitors were used for this study. Competitors were split by gender and grouped into quintiles (Q1-Q5) based upon their final ranking. Quintiles were compared for one-repetition maximum (1RM) squat, deadlift, clean and jerk (CJ), snatch, 400-m sprint, 5,000-m run, and benchmark workouts (Fran, Helen, Grace, Filthy-50, and Fight-gone-bad). Separate one-way analyses of variance revealed that all competitors in Q1 reported greater (p<0.05) 1RM loads for squat (Males: 201.6±19.1 kg; Females: 126.1±13.0 kg), deadlift (Males: 232.4±20.5 kg; Females: 148.3±14.5 kg), CJ (Males: 148.9±12.1 kg; Females: 95.7±8.4 kg), and snatch (Males: 119.4±10.9 kg; Females 76.5±7.6 kg) compared to other quintiles. In addition, Males in Q1 (59.3±5.9 sec) reported faster (p<0.05) 400-m times than Q3 only (62.6±7.3 sec), but were not different from any group in the 5,000-m run. Females in Q2 (67.5 ± 8.8 sec) reported faster (p<0.05) 400-m times than Q3-Q5 (73.5-74.8 sec) and faster (21.3 ± 1.8 min, p<0.02) 5,000-m times than Q4 (22.6±2.2 min) and Q5 (22.6±1.9 min). Faster (p<0.05) Fran times were reported by Q1 (males: 138.2±13.3 sec; females: 159.4±28.3 sec) compared to other groups, while the results of other workouts were variable. These data indicate that the most successful athletes excel in all areas of fitness/skill, while lower-ranking athletes should focus on developing strength and power after achieving sufficient proficiency in sport-specific skills.

  14. Regulation of substrate use during the marathon.

    PubMed

    Spriet, Lawrence L

    2007-01-01

    The energy required to run a marathon is mainly provided through oxidative phosphorylation in the mitochondria of the active muscles. Small amounts of energy from substrate phosphorylation are also required during transitions and short periods when running speed is increased. The three inputs for adenosine triphosphate production in the mitochondria include oxygen, free adenosine diphosphate and inorganic phosphate, and reducing equivalents. The reducing equivalents are derived from the metabolism of fat and carbohydrate (CHO), which are mobilised from intramuscular stores and also delivered from adipose tissue and liver, respectively. The metabolism of fat and CHO is tightly controlled at several regulatory sites during marathon running. Slower, recreational runners run at 60-65% maximal oxygen uptake (VO(2max)) for approximately 3:45:00 and faster athletes run at 70-75% for approximately 2:45:00. Both groups rely heavily on fat and CHO fuels. However, elite athletes run marathons at speeds requiring between 80% and 90% VO(2max), and finish in times between 2:05:00 and 2:20:00. They are highly adapted to oxidise fat and must do so during training. However, they compete at such high running speeds, that CHO oxidation (also highly adapted) may be the exclusive source of energy while racing. Further work with elite athletes is needed to examine this possibility.

  15. It's About Time: Mark Twain's "My Watch" and Relativity

    NASA Astrophysics Data System (ADS)

    Henderson, Hugh

    2005-09-01

    Over three decades before Einstein's year of miracles, the American humorist Mark Twain published an essay titled "My Watch," in which he recounts his experiences with a previously reliable pocket watch and those who tried to rehabilitate it. He begins his essay by confessing his first error: My beautiful new watch had run eighteen months without losing or gaining, and without breaking any part of its machinery or stopping. I had come to believe it infallible in its judgments about the time of day, and to consider its constitution and its anatomy imperishable. But at last, one night, I let it run down. I grieved about it as if it were a recognized messenger and forerunner of calamity. Twain then sets the watch by guess, and takes it to the "chief jeweler's to set it by the exact time." To Twain's dismay, the jeweler insists on opening it up and adjusting the regulator inside the watch, and the watch begins to gain time. It gained faster and faster day by day. Within a week it sickened to a raging fever, and its pulse went up to a hundred and fifty in the shade. At the end of two months, it had left all the timepieces of the town far in the rear, and was a fraction over thirteen days ahead of the almanac. It was away into November enjoying the snow, while the October leaves were still turning. It hurried up house rent, bills payable, and such things, in such a ruinous way that I could not abide it.

  16. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
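
    The one-dimensional HGW pass is short enough to sketch: per block of length k, a forward prefix-max and a backward suffix-max are combined so each output needs roughly three max operations regardless of k (edge handling here, by repeating the last sample, is one of several possible conventions):

        # van Herk/Gil-Werman running maximum over windows f[x .. x+k-1].
        def hgw_running_max(f, k):
            n = len(f)
            f = f + [f[-1]] * ((-n) % k)          # pad to a multiple of k
            m = len(f)
            g, h = f[:], f[:]
            for i in range(m):                    # prefix max within each block
                if i % k != 0:
                    g[i] = max(g[i - 1], f[i])
            for i in range(m - 2, -1, -1):        # suffix max within each block
                if (i + 1) % k != 0:
                    h[i] = max(h[i + 1], f[i])
            return [max(h[x], g[x + k - 1]) for x in range(n)]

        print(hgw_running_max([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], 3))
        # -> [4, 4, 5, 9, 9, 9, 6, 6, 5, 3]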

  17. Compartmentalized self-replication under fast PCR cycling conditions yields Taq DNA polymerase mutants with increased DNA-binding affinity and blood resistance.

    PubMed

    Arezi, Bahram; McKinney, Nancy; Hansen, Connie; Cayouette, Michelle; Fox, Jeffrey; Chen, Keith; Lapira, Jennifer; Hamilton, Sarah; Hogrefe, Holly

    2014-01-01

    Faster-cycling PCR formulations, protocols, and instruments have been developed to address the need for increased throughput and shorter turn-around times for PCR-based assays. Although run times can be cut by up to 50%, shorter cycle times have been correlated with lower detection sensitivity and increased variability. To address these concerns, we applied Compartmentalized Self Replication (CSR) to evolve faster-cycling mutants of Taq DNA polymerase. After five rounds of selection using progressively shorter PCR extension times, individual mutations identified in the fastest-cycling clones were randomly combined using ligation-based multi-site mutagenesis. The best-performing combinatorial mutants exhibit 35- to 90-fold higher affinity (lower Kd ) for primed template and a moderate (2-fold) increase in extension rate compared to wild-type Taq. Further characterization revealed that CSR-selected mutations provide increased resistance to inhibitors, and most notably, enable direct amplification from up to 65% whole blood. We discuss the contribution of individual mutations to fast-cycling and blood-resistant phenotypes.

  18. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges. Two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are being loaded and having values being calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to load the graphics for computers; however, in recent years, GPUs are being used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.

  19. Mercury BLASTP: Accelerating Protein Sequence Alignment

    PubMed Central

    Jacob, Arpith; Lancaster, Joseph; Buhler, Jeremy; Harris, Brandon; Chamberlain, Roger D.

    2008-01-01

    Large-scale protein sequence comparison is an important but compute-intensive task in molecular biology. BLASTP is the most popular tool for comparative analysis of protein sequences. In recent years, an exponential increase in the size of protein sequence databases has required either exponentially more running time or a cluster of machines to keep pace. To address this problem, we have designed and built a high-performance FPGA-accelerated version of BLASTP, Mercury BLASTP. In this paper, we describe the architecture of the portions of the application that are accelerated in the FPGA, and we also describe the integration of these FPGA-accelerated portions with the existing BLASTP software. We have implemented Mercury BLASTP on a commodity workstation with two Xilinx Virtex-II 6000 FPGAs. We show that the new design runs 11-15 times faster than software BLASTP on a modern CPU while delivering close to 99% identical results. PMID:19492068

  20. A stitch in time saves nine: suture technique does not affect intestinal growth in a young, growing animal model.

    PubMed

    Gurien, Lori A; Wyrick, Deidre L; Smith, Samuel D; Maxson, R Todd

    2016-05-01

    Although this issue remains unexamined, pediatric surgeons commonly use simple interrupted suture for bowel anastomosis, as it is thought to improve intestinal growth postoperatively compared to continuous running suture. However, effects on intestinal growth are unclear. We compared intestinal growth using different anastomotic techniques during the postoperative period in young rats. Young, growing rats underwent small bowel transection and anastomosis using either simple interrupted or continuous running technique. At 7 weeks postoperatively, after a four-fold growth, the anastomotic site was resected. Diameters and burst pressures were measured. Thirteen rats underwent anastomosis with the simple interrupted technique and sixteen with the continuous running method. No differences were found in body weight at the first (102.46 vs 109.75 g) or second operation (413.85 vs 430.63 g). Neither the diameters (0.69 vs 0.79 cm) nor burst pressures were statistically different, although the calculated circumference was smaller in the simple interrupted group (2.18 vs 2.59 cm; p=0.03). No ruptures occurred at the anastomotic line. This pilot study is the first to compare continuous running to simple interrupted intestinal anastomosis in a pediatric model and showed no difference in growth. Adopting continuous running techniques for bowel anastomosis in young children may lead to faster operative time without affecting intestinal growth. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a rich laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs where we processed a subset of IPHAS data that have image source density peaks over 170,000 per field of view (500,000 deg⁻²). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields another improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieved with processing IPHAS data is 25 s.
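
    An in-memory toy of the winning layout: sources bucketed into one-degree declination strips and kept sorted by declination, so a positional look-up probes only the strips overlapping the search box (a crude box test stands in for a true angular-distance cut):

        # Declination-partitioned positional look-up (toy analogue of the
        # database layout; coordinates in degrees).
        import bisect
        import math
        from collections import defaultdict

        def build_strips(sources):                 # sources: [(ra, dec), ...]
            strips = defaultdict(list)
            for ra, dec in sources:
                strips[math.floor(dec)].append((dec, ra))
            for strip in strips.values():
                strip.sort()                       # densely sorted by declination
            return strips

        def match(strips, ra, dec, radius):
            hits = []
            for band in range(math.floor(dec - radius), math.floor(dec + radius) + 1):
                strip = strips.get(band, [])
                lo = bisect.bisect_left(strip, (dec - radius,))
                for d, r in strip[lo:]:
                    if d > dec + radius:
                        break
                    if abs(r - ra) <= radius:      # refine with real angular distance
                        hits.append((r, d))
            return hits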

  2. Physiological and Biomechanical Responses of Highly Trained Distance Runners to Lower-Body Positive Pressure Treadmill Running.

    PubMed

    Barnes, Kyle R; Janecke, Jessica N

    2017-11-21

    As a way to train at faster running speeds, add training volume, prevent injury, or rehabilitate after an injury, lower-body positive pressure treadmills (LBPPT) have become increasingly commonplace among athletes. However, there is conflicting evidence and a paucity of data describing the physiological and biomechanical responses to LBPPT running in highly trained or elite-caliber runners at the speeds at which they habitually train, which are considerably faster than those of recreational runners. Furthermore, data are lacking regarding female runners' responses to LBPPT running. Therefore, this study was designed to evaluate the physiological and biomechanical responses to LBPPT running in highly trained male and female distance runners. Fifteen highly trained distance runners (seven male; eight female) completed a single running test composed of a 4 × 9-min interval series at fixed percentages of body weight ranging from 0 to 30% body weight support (BWS) in 10% increments on the LBPPT. The first interval was always conducted at 0% BWS; thereafter, intervals at 10, 20, and 30% BWS were conducted in random order. Each interval consisted of three stages of 3 min each, at velocities of 14.5, 16.1, and 17.7 km·h-1 for men and 12.9, 14.5, and 16.1 km·h-1 for women. Expired gases, ventilation, breathing frequency, heart rate (HR), rating of perceived exertion (RPE), and stride characteristics were measured at each running speed and BWS. Male and female runners had similar physiological and biomechanical responses to running on the LBPPT. Increasing BWS increased stride length (p < 0.02) and flight duration (p < 0.01) and decreased stride rate (p < 0.01) and contact time (p < 0.01), with small-to-large effect magnitudes. There was a large attenuation of oxygen consumption (VO2) relative to BWS (p < 0.001), while there were trivial-to-moderate reductions in respiratory exchange ratio, minute ventilation, and respiratory frequency (p > 0.05), and small-to-large effects on HR and RPE (p < 0.01). There were trivial-to-small differences in minute ventilation, respiratory frequency, HR, and RPE for a given VO2 across the various BWS conditions (p > 0.05). The results indicate that male and female distance runners have similar physiological and biomechanical responses to LBPPT running. Overall, the biomechanical changes during LBPPT running all contributed to a lower metabolic cost and the corresponding physiological changes.

  3. How do prosthetic stiffness, height and running speed affect the biomechanics of athletes with bilateral transtibial amputations?

    PubMed Central

    Taboga, Paolo; Grabowski, Alena M.

    2017-01-01

    Limited available information describes how running-specific prostheses and running speed affect the biomechanics of athletes with bilateral transtibial amputations. Accordingly, we quantified the effects of prosthetic stiffness, height and speed on the biomechanics of five athletes with bilateral transtibial amputations during treadmill running. Each athlete performed a set of running trials with 15 different prosthetic model, stiffness and height combinations. Each set of trials began with the athlete running on a force-measuring treadmill at 3 m s−1; subsequent trials incremented by 1 m s−1 until they achieved their fastest attainable speed. We collected ground reaction forces (GRFs) during each trial. Prosthetic stiffness, height and running speed each affected biomechanics. Specifically, with stiffer prostheses, athletes exhibited greater peak and stance-average vertical GRFs (β = 0.03; p < 0.001), increased overall leg stiffness (β = 0.21; p < 0.001), decreased ground contact time (β = −0.07; p < 0.001) and increased step frequency (β = 0.042; p < 0.001). Prosthetic height was inversely associated with step frequency (β = −0.021; p < 0.001). Running speed was inversely associated with leg stiffness (β = −0.58; p < 0.001). Moreover, at faster running speeds, the effects of prosthetic stiffness and height on biomechanics were mitigated and unchanged, respectively. Thus, prosthetic stiffness, but not height, likely influences distance running performance more than sprinting performance for athletes with bilateral transtibial amputations. PMID:28659414

  4. How do prosthetic stiffness, height and running speed affect the biomechanics of athletes with bilateral transtibial amputations?

    PubMed

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-06-01

    Limited available information describes how running-specific prostheses and running speed affect the biomechanics of athletes with bilateral transtibial amputations. Accordingly, we quantified the effects of prosthetic stiffness, height and speed on the biomechanics of five athletes with bilateral transtibial amputations during treadmill running. Each athlete performed a set of running trials with 15 different prosthetic model, stiffness and height combinations. Each set of trials began with the athlete running on a force-measuring treadmill at 3 m s-1; subsequent trials incremented by 1 m s-1 until they achieved their fastest attainable speed. We collected ground reaction forces (GRFs) during each trial. Prosthetic stiffness, height and running speed each affected biomechanics. Specifically, with stiffer prostheses, athletes exhibited greater peak and stance-average vertical GRFs (β = 0.03; p < 0.001), increased overall leg stiffness (β = 0.21; p < 0.001), decreased ground contact time (β = -0.07; p < 0.001) and increased step frequency (β = 0.042; p < 0.001). Prosthetic height was inversely associated with step frequency (β = -0.021; p < 0.001). Running speed was inversely associated with leg stiffness (β = -0.58; p < 0.001). Moreover, at faster running speeds, the effects of prosthetic stiffness and height on biomechanics were mitigated and unchanged, respectively. Thus, prosthetic stiffness, but not height, likely influences distance running performance more than sprinting performance for athletes with bilateral transtibial amputations. © 2017 The Author(s).

  5. Betalain-rich concentrate supplementation improves exercise performance and recovery in competitive triathletes.

    PubMed

    Montenegro, Cristhian F; Kwong, David A; Minow, Zev A; Davis, Brian A; Lozada, Christina F; Casazza, Gretchen A

    2017-02-01

    We aimed to determine the effects of a betalain-rich concentrate (BRC) of beetroots, containing no sugars or nitrates, on exercise performance and recovery. Twenty-two (9 men and 13 women) triathletes (age, 38 ± 11 years) completed 2 double-blind, crossover, randomized trials (BRC and placebo) starting 7 days apart. Each trial was preceded by 6 days of supplementation with 100 mg·day-1 of BRC or placebo. On the 7th day of supplementation, exercise trials commenced 120 min after ingestion of 50 mg of BRC or placebo and consisted of 40 min of cycling (75 ± 5% of maximal oxygen consumption) followed by a 10-km running time trial (TT). Subjects returned 24 h later to complete a 5-km running TT to assess recovery. The 10-km TT was faster with the BRC treatment (49.5 ± 8.9 vs. 50.8 ± 10.3 min, p = 0.03). Despite the faster running, average heart rate and ratings of perceived exertion were not different between treatments. The 5-km TT, 24 h after the 10-km TT, was faster with the BRC treatment in 17 of the 22 subjects (23.2 ± 4.4 vs. 23.9 ± 4.7 min, p = 0.003). Creatine kinase, a muscle damage marker, increased less from baseline to after the 10-km TT (40.5 ± 22.5 vs. 49.7 ± 21.5 U·L-1, p = 0.02), and subjective fatigue increased less from baseline to 24 h after the 10-km TT (-0.05 ± 6.1 vs. 3.23 ± 6.1, p = 0.05) with BRC. In conclusion, BRC supplementation improved 10-km TT performance in competitive male and female triathletes. The improved 5-km TT performance 24 h after the 10-km TT and the attenuated increases in creatine kinase and fatigue suggest enhanced recovery while taking BRC.

  6. How should I regulate my emotions if I want to run faster?

    PubMed

    Lane, Andrew M; Devonport, Tracey J; Friesen, Andrew P; Beedie, Christopher J; Fullerton, Christopher L; Stanley, Damian M

    2016-01-01

    The present study investigated the effects of emotion regulation strategies on self-reported emotions and 1600 m track running performance. In stage 1 of a three-stage study, participants (N = 15) reported emotional states associated with best, worst and ideal performance. Results indicated that the best and ideal emotional states for performance comprised feeling happy, calm, energetic and moderately anxious, whereas the worst emotional state for performance comprised feeling downhearted, sluggish and highly anxious. In stage 2, emotion regulation interventions were developed using online material and supported by electronic feedback. One intervention motivated participants to increase the intensity of unpleasant emotions (e.g. feel more angry and anxious). A second intervention motivated participants to reduce the intensity of unpleasant emotions (e.g. feel less angry and anxious). In stage 3, using a repeated measures design, participants used each intervention before running a 1600 m time trial. Data were compared with a no-treatment control condition. The intervention designed to increase the intensity of unpleasant emotions resulted in higher anxiety and lower calmness scores but had no significant effect on 1600 m running time. The intervention designed to reduce the intensity of unpleasant emotions was associated with significantly slower times for the first 400 m. We suggest future research should investigate emotion regulation, emotion and performance using quasi-experimental methods with performance measures that are meaningful to participants.

  7. Locomotor trade-offs in mice selectively bred for high voluntary wheel running.

    PubMed

    Dlugosz, Elizabeth M; Chappell, Mark A; McGillivray, David G; Syme, Douglas A; Garland, Theodore

    2009-08-01

    We investigated sprint performance and running economy of a unique 'mini-muscle' phenotype that evolved in response to selection for high voluntary wheel running in laboratory mice (Mus domesticus). Mice from four replicate selected (S) lines run nearly three times as far per day as mice from four control lines. The mini-muscle phenotype, resulting from an initially rare autosomal recessive allele, has been favoured by the selection protocol, becoming fixed in one of the two S lines in which it occurred. In homozygotes, hindlimb muscle mass is halved, mass-specific muscle oxidative capacity is doubled, and the medial gastrocnemius exhibits about half the mass-specific isotonic power, less than half the mass-specific cyclic work and power, but doubled fatigue resistance. We hypothesized that mini-muscle mice would have a lower whole-animal energy cost of transport (COT), resulting from lower costs of cycling their lighter limbs, and reduced sprint speed, from reduced maximal force production. We measured sprint speed on a racetrack, and the slopes (incremental COT, or iCOT) and intercepts of the metabolic rate versus speed relationship during voluntary wheel running, in 10 mini-muscle and 20 normal S-line females. Mini-muscle mice ran faster and farther on wheels, but for less time per day. Mini-muscle mice had significantly lower sprint speeds, indicating a functional trade-off. However, contrary to predictions, mini-muscle mice had a higher COT, mainly because of higher zero-speed intercepts and postural costs (intercept minus resting metabolic rate). Thus, mice with altered limb morphology after intense selection for running long distances do not necessarily run more economically.
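
    As a concrete illustration of the slope/intercept decomposition used above, the following Python sketch fits metabolic rate against speed to recover iCOT and the postural cost; all numbers are invented for demonstration.

        # Incremental cost of transport (iCOT) as the slope of the metabolic
        # rate vs. speed relationship; postural cost as the zero-speed
        # intercept minus resting metabolic rate. Data are made up.
        import numpy as np

        speeds = np.array([0.5, 1.0, 1.5, 2.0])   # m/s, wheel-running speeds
        metab  = np.array([1.9, 2.6, 3.3, 4.0])   # W, metabolic rate
        resting_metabolic_rate = 0.9              # W

        icot, intercept = np.polyfit(speeds, metab, 1)  # rate = iCOT*v + b
        postural_cost = intercept - resting_metabolic_rate

        print(f"iCOT = {icot:.2f} J/m, postural cost = {postural_cost:.2f} W")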

  8. Self-Motion Perception during Locomotor Recalibration: More than Meets the Eye

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Pelah, Adar; Fox, Laura F.; Lewis, Jed; Kane, Rachel; Walley, Katherine A.

    2005-01-01

    Do locomotor aftereffects depend specifically on visual feedback? In 7 experiments, 116 college students were tested, with closed eyes, at stationary running or at walking to a previewed target after adaptation, with closed eyes, to treadmill locomotion. Subjects showed faster inadvertent drift during stationary running and increased distance…

  9. Relationships between triathlon performance and pacing strategy during the run in an international competition.

    PubMed

    Le Meur, Yann; Bernard, Thierry; Dorel, Sylvain; Abbiss, Chris R; Honnorat, Gérard; Brisswalter, Jeanick; Hausswirth, Christophe

    2011-06-01

    The purpose of the present study was to examine relationships between athletes' pacing strategies and running performance during an international triathlon competition. Running split times for each of the 107 finishers of the 2009 European Triathlon Championships (42 females and 65 males) were determined with the use of a digital synchronized video analysis system. Five cameras were placed at various positions on the running circuit (4 laps of 2.42 km). Running speed (RSrace) and an index of running speed variability (IRSVrace) were subsequently calculated over each section or running split. Mean running speed over the first 1272 m of lap 1 was 0.76 km·h-1 (+4.4%) and 1.00 km·h-1 (+5.6%) faster than the mean running speed over the same section during the three last laps, for females and males, respectively (P < .001). A significant inverse correlation was observed between RSrace and IRSVrace for all triathletes (females r = -0.41, P = .009; males r = -0.65, P = .002; whole population r = -0.76, P = .001). Females demonstrated a higher IRSVrace than men (6.1 ± 0.5 km·h-1 vs. 4.0 ± 1.4 km·h-1, P = .001), owing to a greater decrease in running speed over uphill sections. Pacing during the run appears to play a key role in high-level triathlon performance. Elite triathletes should reduce their initial running speed during international competitions, even if high levels of motivation and direct opponents lead them to adopt an aggressive strategy.

  10. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
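
    A minimal sketch of the coordinate hit-and-run walk at the core of CHRR is given below, assuming a polytope in the form A x ≤ b; the rounding preprocessing step and the COBRA Toolbox implementation details are omitted.

        # Coordinate hit-and-run: pick a random axis, find the feasible chord
        # along that axis, and jump to a uniform point on the chord.
        import numpy as np

        def coordinate_hit_and_run(A, b, x0, n_samples, rng=None):
            """Random walk whose stationary distribution is uniform on the
            polytope {x : A x <= b}. x0 must be strictly feasible."""
            rng = np.random.default_rng(rng)
            x = np.asarray(x0, dtype=float).copy()
            samples = []
            for _ in range(n_samples):
                i = rng.integers(x.size)          # random coordinate direction
                slack = b - A @ x                 # positive for strict interior
                a_i = A[:, i]
                # A step t along e_i stays feasible while t * a_i <= slack.
                with np.errstate(divide="ignore"):
                    t = slack / a_i
                hi = t[a_i > 0].min() if np.any(a_i > 0) else np.inf
                lo = t[a_i < 0].max() if np.any(a_i < 0) else -np.inf
                x[i] += rng.uniform(lo, hi)       # uniform point on the chord
                samples.append(x.copy())
            return np.array(samples)

        # Example: uniform samples from the unit square [0, 1]^2.
        A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
        b = np.array([1., 0., 1., 0.])
        pts = coordinate_hit_and_run(A, b, x0=[0.5, 0.5], n_samples=1000, rng=0)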

  11. Automated acoustic localization and call association for vocalizing humpback whales on the Navy's Pacific Missile Range Facility.

    PubMed

    Helble, Tyler A; Ierley, Glenn R; D'Spain, Gerald L; Martin, Stephen W

    2015-01-01

    Time difference of arrival (TDOA) methods for acoustically localizing multiple marine mammals have been applied to recorded data from the Navy's Pacific Missile Range Facility in order to localize and track humpback whales. Modifications to established methods were necessary to simultaneously track multiple animals on the range faster than real time and in a fully automated way, while minimizing the number of incorrect localizations. The resulting algorithms were run with no human intervention, at computational speeds faster than the data recording speed, on over forty days of acoustic recordings from the range, spanning multiple years. Spatial localizations based on correlating sequences of units originating from within the range produce estimates with a standard deviation typically 10 m or less (due primarily to TDOA measurement errors) and a bias of 20 m or less (due primarily to sound speed mismatch). An automated method for associating units with individual whales is presented, enabling automated humpback song analyses to be performed.
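
    The following Python sketch illustrates the basic TDOA localization step under invented geometry and noise; it is not the paper's algorithm, which additionally handles correlation of unit sequences, track association, and error screening.

        # Localize a source from arrival-time differences relative to a
        # reference hydrophone, by nonlinear least squares.
        import numpy as np
        from scipy.optimize import least_squares

        C = 1500.0  # nominal sound speed in seawater, m/s

        def tdoa_residuals(src, sensors, tdoas):
            """Measured minus predicted TDOAs (sensor i vs. sensor 0)."""
            dists = np.linalg.norm(sensors - src, axis=1)
            predicted = (dists[1:] - dists[0]) / C
            return predicted - tdoas

        sensors = np.array([[0., 0.], [4000., 0.], [0., 4000.], [4000., 4000.]])
        true_src = np.array([1200., 2600.])
        d = np.linalg.norm(sensors - true_src, axis=1)
        noise = 1e-4 * np.random.default_rng(1).standard_normal(3)
        tdoas = (d[1:] - d[0]) / C + noise       # synthetic measurements

        fit = least_squares(tdoa_residuals, x0=[2000., 2000.],
                            args=(sensors, tdoas))
        print("estimated source position:", fit.x)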

  12. Influence of music on maximal self-paced running performance and passive post-exercise recovery rate.

    PubMed

    Lee, Sam; Kimmerly, Derek S

    2016-01-01

    The purpose of this study was to examine the influence of fast-tempo music (FM) on self-paced running performance (heart rate, running speed, ratings of perceived exertion), and of slow-tempo music (SM) on post-exercise heart rate and blood lactate recovery rates. Twelve participants (5 women) completed three randomly assigned conditions: static noise (control), FM and SM. Each condition consisted of self-paced treadmill running and a supine post-exercise recovery period (20 min each). Average running speed, heart rate (HR) and ratings of perceived exertion (RPE) were measured during the treadmill running period, while HR and blood lactate were measured during the recovery period. Listening to FM during exercise resulted in a faster self-selected running speed (10.8±1.7 vs. 9.9±1.4 km·h-1, P<0.001) and a higher peak HR (184±12 vs. 177±17 beats·min-1, P<0.01) without a corresponding difference in peak RPE (FM, 16.8±1.8 vs. SM, 15.7±1.9, P=0.10). Listening to SM during the post-exercise period resulted in faster HR recovery throughout (main effect P<0.001) and lower blood lactate at the end of recovery (2.8±0.4 vs. 4.7±0.8 mmol·L-1, P<0.05). Listening to FM during exercise can increase self-paced intensity without altering perceived exertion levels, while listening to SM after exercise can accelerate the recovery rate back to resting levels.

  13. Dribbling determinants in sub-elite youth soccer players.

    PubMed

    Zago, Matteo; Piovan, Andrea Gianluca; Annoni, Isabella; Ciprandi, Daniela; Iaia, F Marcello; Sforza, Chiarella

    2016-01-01

    Dribbling speed in soccer is considered critical to the outcome of the game and can assist in the talent identification process. However, little is known about the biomechanics of this skill. By means of a motion capture system, we aimed to quantitatively investigate the determinants of effective dribbling skill in a group of 10 Under-13 sub-elite players, divided by the median-split technique according to their dribbling test time (faster and slower groups). Foot-ball contact cadence; centre of mass (CoM) ranges of motion (RoM), velocity and acceleration; and stride length, cadence and variability were computed. Hip and knee joint RoMs were also considered. Faster players, as compared with slower players, showed a 30% higher foot-ball cadence (3.0 ± 0.1 vs. 2.3 ± 0.2 contacts · s(-1), P < 0.01); reduced CoM mediolateral (0.91 ± 0.05 vs. 1.14 ± 0.16 m, P < 0.05) and vertical (0.19 ± 0.01 vs. 0.25 ± 0.03 m, P < 0.05) RoMs; a higher right stride cadence (+20%, P < 0.05) with lower variability (P < 0.05); and reduced hip and knee flexion RoMs (P < 0.05). In conclusion, faster players are able to run with the ball through a shorter path in a more economical way. To effectively develop dribbling skill, coaches are encouraged to design specific practices where high stride frequency and narrow run trajectories are required.

  14. The Elimination of a Self-Injurious Avoidance Response through a Forced Running Consequence.

    ERIC Educational Resources Information Center

    Borreson, Paul M.

    1980-01-01

    The self-injurious avoidance responses of a 22-year-old severely mentally retarded male were eliminated through a forced running consequence. Side effects, such as reduced noise, increase in smiling, and faster progress toward instructional objectives, were also noted. The results were maintained over a period of two years. (Author/PHR)

  15. Development of Equipment for Use in Sport

    ERIC Educational Resources Information Center

    James, David

    2012-01-01

    No one has ever been able to create a running shoe that can make one run faster, but in other sports the design of equipment has the potential to offer considerable enhancement. Judgement has to be made as to whether such advantage becomes unfair. This article indicates many possible sports in which the equipment plays an important part in the…

  16. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    PubMed

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to the individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in a right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences had no influence on domain-specific magnitude processing (apart from age, which increased the decade distance effect), they generally influenced performance on the two-digit number comparison task.

  17. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
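
    As a toy illustration of the relaxation idea, the sketch below applies Gauss-Seidel waveform relaxation to a linear two-node RC line; the circuit values are invented, and real ITA/WRN simulators handle nonlinear device models and event-driven time steps.

        # Waveform relaxation: integrate each node over the whole interval
        # with the other node's waveform frozen, then sweep until converged.
        import numpy as np

        R1 = R2 = 1e3      # ohms
        C1 = C2 = 1e-6     # farads
        VIN = 5.0          # step input, volts
        T, DT = 5e-3, 1e-6
        t = np.arange(0.0, T, DT)

        def solve_node1(v2):
            """Forward-Euler integration of node 1 with node 2 frozen."""
            v1 = np.zeros_like(t)
            for k in range(1, t.size):
                i_in = (VIN - v1[k-1]) / R1
                i_out = (v1[k-1] - v2[k-1]) / R2
                v1[k] = v1[k-1] + DT * (i_in - i_out) / C1
            return v1

        def solve_node2(v1):
            """Forward-Euler integration of node 2 with node 1 frozen."""
            v2 = np.zeros_like(t)
            for k in range(1, t.size):
                v2[k] = v2[k-1] + DT * (v1[k-1] - v2[k-1]) / (R2 * C2)
            return v2

        v1 = np.zeros_like(t)              # initial waveform guesses
        v2 = np.zeros_like(t)
        for sweep in range(10):            # relaxation sweeps over waveforms
            v1_new = solve_node1(v2)
            v2 = solve_node2(v1_new)       # Gauss-Seidel: use the fresh v1
            if np.max(np.abs(v1_new - v1)) < 1e-6:
                break
            v1 = v1_new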

  18. RETURN TO RUNNING FOLLOWING A KNEE DISARTICULATION AMPUTATION: A CASE REPORT

    PubMed Central

    Diebal-Lee, Angela R.; Kuenzi, Robert S.; Rábago, Christopher A.

    2017-01-01

    Background and Purpose: The evolution of running-specific prostheses has empowered athletes with lower extremity amputations to run farther and faster than previously thought possible, but running with proper mechanics is still paramount to an injury-free, active lifestyle. The purpose of this case report was to describe the successful alteration of intact limb mechanics from a Rearfoot Striking (RFS) to a Non-Rearfoot Striking (NRFS) pattern in an individual with a knee disarticulation amputation, the associated reduction in Average Vertical Loading Rate (AVLR), and the improvement in functional performance following the intervention. Case Description: A 30-year-old male with a traumatic right knee disarticulation amputation reported complaints of residual limb pain when running distances greater than 5 km, limiting his ability to train toward his goal of participating in triathlons. Qualitative assessment of his running mechanics revealed a RFS pattern with his intact limb and a NRFS pattern with his prosthetic limb. A full-body kinematic and kinetic running analysis using 3D motion capture and force plates was performed. The average intact limb loading rate was four times greater (112 body weights/s) than in his prosthetic limb, which predisposed him to possible injury. He underwent a three-week running intervention with a certified running specialist to learn a NRFS pattern with his intact limb. Outcomes: Immediately following the running intervention, he was able to run distances of over 10 km without pain. On a two-mile fitness test, he decreased his run time from 19:54 min to 15:14 min. Additionally, the intact limb loading rate was dramatically reduced to 27 body weights/s, nearly identical to the prosthetic limb (24 body weights/s). Discussion: This case report outlines a detailed return-to-run program that targets proprioceptive and neuromuscular components, injury prevention, and specificity of training strategies. The outcomes of this case report are promising as they may spur additional research toward understanding how to eliminate potential injury risk factors associated with running after limb loss. Level of Evidence: 4 PMID:28900572

  19. An Anticipatory Model of Cavitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B., Jr.; Hylton, J.O.

    1999-04-05

    The Anticipatory System (AS) formalism developed by Robert Rosen provides some insight into the problem of embedding intelligent behavior in machines. AS emulates the anticipatory behavior of biological systems. AS bases its behavior on its expectations about the near future, and those expectations are modified as the system gains experience. The expectation is based on an internal model that is drawn from an appeal to physical reality. To be adaptive, the model must be able to update itself. To be practical, the model must run faster than real time. The need for a physical model and the requirement that the model execute at extreme speeds have held back the application of AS to practical problems. Two recent advances make it possible to consider the use of AS for practical intelligent sensors. First, advances in transducer technology make it possible to obtain previously unavailable data from which a model can be derived. For example, acoustic emissions (AE) can be fed into a Bayesian system identifier that enables the separation of a weak characterizing signal, such as the signature of pump cavitation precursors, from a strong masking signal, such as a pump vibration feature. The second advance is the development of extremely fast, but inexpensive, digital signal processing hardware on which it is possible to run an adaptive Bayesian-derived model faster than real time. This paper reports the investigation of an AS using a model of cavitation based on hydrodynamic principles and Bayesian analysis of data from high-performance AE sensors.

  20. Speeding up N-body simulations of modified gravity: chameleon screening models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Sownak; Li, Baojiu; He, Jian-hua

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  1. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
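
    For intuition, the sketch below applies Newton-Gauss-Seidel relaxation, the baseline method whose slow convergence motivates the paper, to a toy nonlinear Poisson equation; the equation and parameters are stand-ins, not the actual f(R) scalar-field equation.

        # Newton-Gauss-Seidel sweeps for lap(u) = u**3 + rho on a periodic grid:
        # one pointwise Newton update per cell, sweeping repeatedly.
        import numpy as np

        N = 64
        h = 1.0 / N
        rng = np.random.default_rng(0)
        rho = rng.standard_normal((N, N))
        rho -= rho.mean()                     # zero-mean source, periodic box
        u = np.zeros((N, N))

        def sweep(u, rho):
            """One Gauss-Seidel sweep with a pointwise Newton step."""
            for i in range(N):
                for j in range(N):
                    nb = (u[(i+1) % N, j] + u[(i-1) % N, j]
                          + u[i, (j+1) % N] + u[i, (j-1) % N])
                    # residual F = lap(u)_ij - u_ij^3 - rho_ij
                    F = (nb - 4.0 * u[i, j]) / h**2 - u[i, j]**3 - rho[i, j]
                    dF = -4.0 / h**2 - 3.0 * u[i, j]**2   # dF/du_ij
                    u[i, j] -= F / dF                     # Newton update
            return u

        for it in range(100):
            u = sweep(u, rho)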

  2. Setting Standards for Medically-Based Running Analysis

    PubMed Central

    Vincent, Heather K.; Herman, Daniel C.; Lear-Barnes, Leslie; Barnes, Robert; Chen, Cong; Greenberg, Scott; Vincent, Kevin R.

    2015-01-01

    Setting standards for medically based running analyses is necessary to ensure that runners receive a high-quality service from practitioners. Medical and training history, physical and functional tests, and motion analysis of running at self-selected and faster speeds are key features of a comprehensive analysis. Self-reported history and movement symmetry are critical factors that require follow-up therapy or long-term management. Pain or injury is typically the result of a functional deficit above or below the site along the kinematic chain. PMID:25014394

  3. Sex differences in elite swimming with advanced age are less than marathon running.

    PubMed

    Senefeld, J; Joyner, M J; Stevens, A; Hunter, S K

    2016-01-01

    The sex difference in marathon performance increases with finishing place and age of the runner but whether this occurs among swimmers is unknown. The purpose was to compare sex differences in swimming velocity across world record place (1st-10th), age group (25-89 years), and event distance. We also compared sex differences between freestyle swimming and marathon running. The world's top 10 swimming times of both sexes for World Championship freestyle stroke, backstroke, breaststroke, and butterfly events and the world's top 10 marathon times in 5-year age groups were obtained. Men were faster than women for freestyle (12.4 ± 4.2%), backstroke (12.8 ± 3.0%), and breaststroke (14.5 ± 3.2%), with the greatest sex differences for butterfly (16.7 ± 5.5%). The sex difference in swimming velocity increased across world record place for freestyle (P < 0.001), breaststroke, and butterfly for all age groups and distances (P < 0.001) because of a greater relative drop-off between first and 10th place for women. The sex difference in marathon running increased with the world record place and the sex difference for marathon running was greater than for swimming (P < 0.001). The sex difference in swimming increased with world record place and age, but was less than for marathon running. Collectively, these results suggest more depth in women's swimming than marathon running. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Does core strength training influence running kinetics, lower-extremity stability, and 5000-M performance in runners?

    PubMed

    Sato, Kimitake; Mokha, Monique

    2009-01-01

    Although strong core muscles are believed to help athletic performance, few scientific studies have been conducted to identify the effectiveness of core strength training (CST) on improving athletic performance. The aim of this study was to determine the effects of 6 weeks of CST on ground reaction forces (GRFs), stability of the lower extremity, and overall running performance in recreational and competitive runners. After a screening process, 28 healthy adults (age, 36.9 +/- 9.4 years; height, 168.4 +/- 9.6 cm; mass, 70.1 +/- 15.3 kg) volunteered and were divided randomly into 2 groups (n = 14 in each group). A test-retest design was used to assess the differences between CST (experimental) and no CST (control) on GRF measures, lower-extremity stability scores, and running performance. The GRF variables were determined by calculating peak impact, active vertical GRFs (vGRFs), and duration of the 2 horizontal GRFs (hGRFs), as measured while running across a force plate. Lower-extremity stability was assessed using the Star Excursion Balance Test. Running performance was determined by 5000-m run time measured on outdoor tracks. Six 2 (pre, post) x 2 (CST, control) mixed-design analyses of variance were used to determine the influence of CST on each dependent variable, p < 0.05. Twenty subjects completed the study (nexp = 12 and ncon = 8). A significant interaction occurred, with the CST group showing faster times in the 5000-m run after 6 weeks. However, CST did not significantly influence GRF variables and lower-leg stability. Core strength training may be an effective training method for improving performance in runners.

  5. Running a marathon induces changes in adipokine levels and in markers of cartilage degradation--novel role for resistin.

    PubMed

    Vuolteenaho, Katriina; Leppänen, Tiina; Kekkonen, Riina; Korpela, Riitta; Moilanen, Eeva

    2014-01-01

    Running a marathon causes strenuous joint loading and increased energy expenditure. Adipokines regulate energy metabolism, but recent studies have indicated that they also exert a role in cartilage degradation in arthritis. Our aim was to investigate the effects of running a marathon on the levels of adipokines and indices of cartilage metabolism. Blood samples were obtained from 46 male marathoners before and after a marathon run. We measured levels of matrix metalloproteinase-3 (MMP-3), cartilage oligomeric protein (COMP) and chitinase 3-like protein 1 (YKL-40) as biomarkers of cartilage turnover and/or damage and plasma concentrations of adipokines adiponectin, leptin and resistin. Mean marathon time was 3:30:46±0:02:46 (h:min:sec). The exertion more than doubled MMP-3 levels and this change correlated negatively with the marathon time (r = -0.448, p = 0.002). YKL-40 levels increased by 56% and the effect on COMP release was variable. Running a marathon increased the levels of resistin and adiponectin, while leptin levels remained unchanged. The marathon-induced changes in resistin levels were positively associated with the changes in MMP-3 (r = 0.382, p = 0.009) and YKL-40 (r = 0.588, p<0.001) and the pre-marathon resistin levels correlated positively with the marathon induced change in YKL-40 (r = 0.386, p = 0.008). The present results show the impact of running a marathon, and possible load frequency, on cartilage metabolism: the faster the marathon was run, the greater was the increase in MMP-3 levels. Further, the results introduce pro-inflammatory adipocytokine resistin as a novel factor, which enhances during marathon race and associates with markers of cartilage degradation.

  6. Utah CTE: Running in New Circles

    ERIC Educational Resources Information Center

    Dobson, Kristine; Fischio, Shannon; Thomas, Susan

    2011-01-01

    Although the authors admit that they do not have any fool-proof formulas to offer for using Web site, blog, Facebook, Twitter, or YouTube in order to more successfully share one's career and technical education (CTE) story, they share a story of their own journey and hope that it may help people to run faster and more effectively in these new…

  7. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study combines the Trithemius algorithm and the double transposition cipher for file security, implemented as an Android-based application. The parameters examined are the real running time and the time complexity. The files used are in PDF format. Overall, the complexity of the two algorithms under the super-encryption method is Θ(n²). However, encryption with the Trithemius algorithm is much faster than with the double transposition cipher, and the processing time grows linearly with the length of the plaintext and the password.
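
    A minimal sketch of such a super-encryption pass is shown below, assuming uppercase-only plaintext and simplified key handling; it is illustrative, not the paper's Android implementation.

        # Trithemius (progressive Caesar) pass followed by two columnar
        # transpositions, i.e. classical super-encryption.
        import string

        ALPHABET = string.ascii_uppercase

        def trithemius_encrypt(plaintext):
            """Shift the i-th letter by i positions (Trithemius tableau)."""
            return "".join(ALPHABET[(ALPHABET.index(ch) + i) % 26]
                           for i, ch in enumerate(plaintext))

        def columnar_transpose(text, key):
            """Write text row-wise under the key, read columns in key order."""
            ncols = len(key)
            rows = [text[i:i + ncols] for i in range(0, len(text), ncols)]
            order = sorted(range(ncols), key=lambda c: key[c])
            return "".join(row[c] for c in order
                           for row in rows if c < len(row))

        def super_encrypt(plaintext, key1, key2):
            t = trithemius_encrypt(plaintext)
            return columnar_transpose(columnar_transpose(t, key1), key2)

        print(super_encrypt("RUNNINGTIME", "KEYA", "BAT"))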

  8. High-throughput sequence alignment using Graphics Processing Units

    PubMed Central

    Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh

    2007-01-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
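
    As a CPU-side illustration of the exact-match lookup that MUMmerGPU parallelizes, the sketch below uses a toy suffix array with binary search instead of a suffix tree; the construction here is deliberately naive.

        # Find all occurrences of a query seed in a reference using a sorted
        # suffix list; O(n^2 log n) toy construction for clarity only.
        import bisect

        def build_suffix_array(ref):
            """Sorted list of (suffix, start) pairs."""
            return sorted((ref[i:], i) for i in range(len(ref)))

        def find_occurrences(sa, pattern):
            """All start positions of pattern, via binary search on suffixes."""
            lo = bisect.bisect_left(sa, (pattern,))
            hits = []
            for suffix, start in sa[lo:]:
                if not suffix.startswith(pattern):
                    break
                hits.append(start)
            return sorted(hits)

        ref = "ACGTACGTGACG"
        sa = build_suffix_array(ref)
        print(find_occurrences(sa, "ACG"))   # -> [0, 4, 9]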

  9. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as a replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times than the best competing program (GARLI) on datasets up to 2500 taxa. On datasets ≥4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date, containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. Availability: icwww.epfl.ch/~stamatak

  10. Real-time object detection and semantic segmentation for autonomous driving

    NASA Astrophysics Data System (ADS)

    Li, Baojun; Liu, Shun; Xu, Weichao; Qiu, Wei

    2018-02-01

    In this paper, we propose a Highly Coupled Network (HCNet) for joint object detection and semantic segmentation. Our method is faster and performs better than previous approaches, whose decoder networks for the different tasks are independent. In addition, we present a multi-scale loss architecture that learns better representations for objects at different scales without extra cost in the inference phase. Experimental results show that our method achieves state-of-the-art results on the KITTI datasets. Moreover, it can run at 35 FPS on a GPU and is thus a practical solution to object detection and semantic segmentation for autonomous driving.
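
    The coupled idea can be sketched as a shared encoder feeding two lightweight task heads, as below in PyTorch; the layer sizes and heads are invented and do not reproduce the actual HCNet architecture.

        # One encoder pass serves both a detection head and a segmentation head.
        import torch
        import torch.nn as nn

        class TwoTaskNet(nn.Module):
            def __init__(self, n_classes=4, n_anchors=9):
                super().__init__()
                self.encoder = nn.Sequential(   # shared feature extractor
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                # detection head: box offsets + objectness per anchor per cell
                self.det_head = nn.Conv2d(64, n_anchors * 5, 1)
                # segmentation head: per-pixel class logits, upsampled back
                self.seg_head = nn.Sequential(
                    nn.Conv2d(64, n_classes, 1),
                    nn.Upsample(scale_factor=4, mode="bilinear",
                                align_corners=False),
                )

            def forward(self, x):
                feats = self.encoder(x)          # computed once for both tasks
                return self.det_head(feats), self.seg_head(feats)

        net = TwoTaskNet()
        det, seg = net(torch.randn(1, 3, 256, 256))
        print(det.shape, seg.shape)  # (1, 45, 64, 64) and (1, 4, 256, 256)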

  11. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision and other arithmetic functions at execution times at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K of random access memory (RAM) and 16K of electrically programmable read-only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high-order languages such as FORTRAN and PL/M-86.

  12. The effects of load carriage on joint work at different running velocities.

    PubMed

    Liew, Bernard X W; Morris, Susan; Netto, Kevin

    2016-10-03

    Running with load carriage has become increasingly prevalent in sport, as well as in many field-based occupations. However, the "sources" of mechanical work during load carriage running are not yet completely understood. The purpose of this study was to determine the influence of load magnitude on mechanical joint work during running, across different velocities. Thirty-one participants performed overground running at three load magnitudes (0%, 10%, 20% body weight) and at three velocities (3, 4, 5 m/s). Three-dimensional motion capture was performed, with synchronised force plate data captured. Inverse dynamics was used to quantify joint work in the stance phase of running. Joint work was normalized to a unit proportion of body weight and leg length (one dimensionless work unit = 532.45 J). Load significantly increased total joint work and total positive work, and this effect was greater at faster velocities. Load carriage increased ankle positive work (β = 6.95×10-4 units of work per 1% BW carried), knee positive work (β = 1.12×10-3 units) and negative work (β = -2.47×10-4 units), and hip negative work (β = -7.79×10-4 units). Load carriage reduced hip positive work, and this effect was smaller at faster velocities. Inter-joint redistribution did not contribute significantly to the altered mechanical work within the spectrum of load and velocity investigated. Hence, the ankle joint contributed to the greatest extent to work production, whilst the knee contributed to the greatest extent to work absorption, when running with load. Copyright © 2016 Elsevier Ltd. All rights reserved.
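
    A worked example of the dimensionless normalization is shown below; the subject mass and leg length are hypothetical values chosen only to reproduce the quoted scale of 532.45 J per unit.

        # Dimensionless joint work = work / (body weight x leg length).
        G = 9.81                     # m/s^2

        mass = 70.1                  # kg (hypothetical subject)
        leg_length = 0.774           # m  (hypothetical)
        unit_work = mass * G * leg_length
        print(f"1 dimensionless unit = {unit_work:.2f} J")     # ~532.3 J

        # Applying the abstract's ankle coefficient at a 20% BW load:
        ankle_beta = 6.95e-4         # units of work per 1% BW carried
        extra_work_J = ankle_beta * 20 * unit_work
        print(f"added ankle positive work at 20% BW ~ {extra_work_J:.2f} J")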

  13. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate the rescaling of single Monte Carlo runs so as to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared with other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for the calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
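
    The rescaling trick itself can be sketched in a few lines: store each detected photon's path length from one baseline simulation, then re-weight for any absorption coefficient via Beer-Lambert scaling instead of re-simulating. The sketch below uses synthetic photon data and covers absorption-only rescaling; the paper's GPU code generalizes this across many property sets at once.

        # Re-weight stored photon paths for a new absorption coefficient mu_a.
        import numpy as np

        rng = np.random.default_rng(42)
        n_photons = 100_000
        path_lengths_cm = rng.gamma(3.0, 0.5, n_photons)   # synthetic paths
        baseline_weights = np.full(n_photons, 1.0 / n_photons)

        def diffuse_reflectance(mu_a):
            """Beer-Lambert re-weighting of the baseline run (mu_a in 1/cm)."""
            return np.sum(baseline_weights * np.exp(-mu_a * path_lengths_cm))

        for mu_a in (0.01, 0.1, 1.0):
            print(f"mu_a = {mu_a:>4}: Rd = {diffuse_reflectance(mu_a):.4f}")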

  14. Muscular strategy shift in human running: dependence of running speed on hip and ankle muscle performance.

    PubMed

    Dorn, Tim W; Schache, Anthony G; Pandy, Marcus G

    2012-06-01

    Humans run faster by increasing a combination of stride length and stride frequency. In slow and medium-paced running, stride length is increased by exerting larger support forces during ground contact, whereas in fast running and sprinting, stride frequency is increased by swinging the legs more rapidly through the air. Many studies have investigated the mechanics of human running, yet little is known about how the individual leg muscles accelerate the joints and centre of mass during this task. The aim of this study was to describe and explain the synergistic actions of the individual leg muscles over a wide range of running speeds, from slow running to maximal sprinting. Experimental gait data from nine subjects were combined with a detailed computer model of the musculoskeletal system to determine the forces developed by the leg muscles at different running speeds. For speeds up to 7 m s(-1), the ankle plantarflexors, soleus and gastrocnemius, contributed most significantly to vertical support forces and hence increases in stride length. At speeds greater than 7 m s(-1), these muscles shortened at relatively high velocities and had less time to generate the forces needed for support. Thus, above 7 m s(-1), the strategy used to increase running speed shifted to the goal of increasing stride frequency. The hip muscles, primarily the iliopsoas, gluteus maximus and hamstrings, achieved this goal by accelerating the hip and knee joints more vigorously during swing. These findings provide insight into the strategies used by the leg muscles to maximise running performance and have implications for the design of athletic training programs.

  15. Fast neural network surrogates for very high dimensional physics-based models in computational oceanography.

    PubMed

    van der Merwe, Rudolph; Leen, Todd K; Lu, Zhengdong; Frolov, Sergey; Baptista, Antonio M

    2007-05-01

    We present neural network surrogates that provide extremely fast and accurate emulation of a large-scale circulation model for the coupled Columbia River, its estuary and near-ocean regions. The circulation model has O(10⁷) degrees of freedom, is highly nonlinear and is driven by ocean, atmospheric and river influences at its boundaries. The surrogates provide accurate emulation of the full circulation code and run over 1000 times faster. Such fast dynamic surrogates will enable significant advances in ensemble forecasts in oceanography and weather.

  16. The influence of surface on the running velocities of elite and amateur orienteer athletes.

    PubMed

    Hébert-Losier, K; Jensen, K; Mourot, L; Holmberg, H-C

    2014-12-01

    We compared the reduction in running velocities from road to off-road terrain in eight elite and eight amateur male orienteer athletes to investigate whether this factor differentiates elite from amateur athletes. On two separate days, each subject ran three 2-km time trials and three 20-m sprints "all-out" on a road, on a path, and in a forest. On a third day, the running economy and maximal aerobic power of individuals were assessed on a treadmill. The elite orienteer ran faster than the amateur on all three surfaces and at both distances, in line with their better running economy and aerobic power. In the forest, the elites ran at a slightly higher percentage of their 2-km (∼3%) and 20-m (∼4%) road velocities. Although these differences did not exhibit traditional statistical significance, magnitude-based inferences suggested likely meaningful differences, particularly during 20-m sprinting. Of course, cognitive, mental, and physical attributes other than the ability to run on different surfaces are required for excellence in orienteering (e.g., a high aerobic power). However, we suggest that athlete-specific assessment of running performance on various surfaces and distances might assist in tailoring training and identifying individual strengths and/or weaknesses in an orienteer. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take days, weeks, or even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or data modifications, compound the computation time even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, together with non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation points, without having to filter the data into a synthesized grid.
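
    The closed form that makes vertical line elements cheap is a two-term expression per element, obtained by integrating dg_z = Gλ z dz / (d² + z²)^(3/2) from depth z1 to z2. The sketch below evaluates it for a single hypothetical rock column; all numbers are illustrative.

        # Vertical gravity of a vertical line mass at horizontal distance d:
        #   g_z = G * lam * (1/sqrt(d^2 + z1^2) - 1/sqrt(d^2 + z2^2)),
        # where lam is mass per unit length and z is measured downward.
        import math

        G = 6.674e-11  # m^3 kg^-1 s^-2

        def line_element_gz(d, z1, z2, lam):
            """Vertical attraction (m/s^2) of one vertical line element."""
            return G * lam * (1.0 / math.hypot(d, z1) - 1.0 / math.hypot(d, z2))

        # A 100 m x 100 m rock column, 0-500 m deep, density 2670 kg/m^3,
        # collapsed onto its axis: lam = density * cross-sectional area.
        lam = 2670.0 * 100.0 * 100.0
        gz = line_element_gz(d=250.0, z1=0.0, z2=500.0, lam=lam)
        print(f"g_z = {gz * 1e5:.3f} mGal")   # 1 mGal = 1e-5 m/s^2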

  18. Experiences with Cray multi-tasking

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1985-01-01

    The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, the design of workable modifications, specific code modifications to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing subroutine data and control structure down the middle of a subroutine is not safe: simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.

  19. Aggregated channels network for real-time pedestrian detection

    NASA Astrophysics Data System (ADS)

    Ghorban, Farzin; Marín, Javier; Su, Yu; Colombo, Alessandro; Kummert, Anton

    2018-04-01

    Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications, such as pedestrian detection, which is usually performed on low-consumption hardware. To alleviate this drawback, most strategies use a two-stage cascade approach: in the first stage, a fast method generates a significant but reduced number of high-quality proposals that are then evaluated by the CNN in the second stage. In this work, we propose a novel detection pipeline that further benefits from the two-stage cascade strategy. More concretely, the enriched and subsequently compressed features used in the first stage are reused as the CNN input. As a consequence, a simpler network architecture, adapted to such small input sizes, allows us to achieve real-time performance and obtain results close to the state of the art while running significantly faster without the use of a GPU. In particular, considering that the proposed pipeline runs at frame rate, the achieved performance is highly competitive. We furthermore demonstrate that the proposed pipeline can on its own serve as an effective proposal generator.

  20. Effect of Maturation on Hemodynamic and Autonomic Control Recovery Following Maximal Running Exercise in Highly Trained Young Soccer Players

    PubMed Central

    Buchheit, Martin; Al Haddad, Hani; Mendez-Villanueva, Alberto; Quod, Marc J.; Bourdon, Pitre C.

    2011-01-01

    The purpose of this study was to examine the effect of maturation on post-exercise hemodynamic and autonomic responses. Fifty-five highly trained young male soccer players (12–18 years) classified as pre-, circum-, or post-peak height velocity (PHV) performed a graded running test to exhaustion on a treadmill. Before (Pre) and after (5th–10th min, Post) exercise, heart rate (HR), stroke volume (SV), cardiac output (CO), arterial pressure (AP), and total peripheral resistance (TPR) were monitored. Parasympathetic (high frequency [HFRR] of HR variability (HRV) and baroreflex sensitivity [Ln BRS]) and sympathetic activity (low frequency [LFSAP] of systolic AP variability) were estimated. Post-exercise blood lactate [La]b, the HR recovery (HRR) time constant, and parasympathetic reactivation (time-varying HRV analysis) were assessed. In all three groups, exercise resulted in increased HR, CO, AP, and LFSAP (P < 0.001), decreased SV, HFRR, and Ln BRS (all P < 0.001), and no change in TPR (P = 0.98). There was no “maturation × time” interaction for any of the hemodynamic or autonomic variables (all P > 0.22). After exercise, pre-PHV players displayed lower SV, CO, and [La]b, faster HRR and greater parasympathetic reactivation compared with circum- and post-PHV players. Multiple regression analysis showed that lean muscle mass, [La]b, and Pre parasympathetic activity were the strongest predictors of HRR (r2 = 0.62, P < 0.001). While pre-PHV players displayed a faster HRR and greater post-exercise parasympathetic reactivation, maturation had little influence on the hemodynamic and autonomic responses following maximal running exercise. HRR relates to lean muscle mass, blood acidosis, and intrinsic parasympathetic function, with less evident impact of post-exercise autonomic function. PMID:22013423

  1. Natural Language Interfaces to Database Systems

    DTIC Science & Technology

    1988-10-01

    the power was off to avoid re-entering data for each run of the calculations. External physical devices were developed such as punched tape and...given rise to more powerful or faster tools. Today, operations with the latest fifth generation database management system are not going to be any faster...database does not represent an evolution of greater power or speed. The fascinating aspect is that it represents an evolution of usability and more

  2. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the rows to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because different computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid. Many industries can benefit from this algorithm, including optics, image processing, signal processing, and engineering applications.
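
    The paper's balancing algorithm is not reproduced here, but its underlying building block is easy to sketch: search upward for the next grid size whose prime factors are all small. A minimal Python sketch (the prime set and the example size are illustrative, not the paper's tuning):

      def next_small_prime_size(n, primes=(2, 3, 5, 7)):
          # Return the smallest m >= n whose prime factors all lie in `primes`.
          # Such sizes keep one-dimensional FFT stages fast; the paper's
          # algorithm additionally balances small against large factors
          # to suit the two-dimensional (transpose) step.
          m = n
          while True:
              k = m
              for p in primes:
                  while k % p == 0:
                      k //= p
              if k == 1:
                  return m
              m += 1

      print(next_small_prime_size(513))   # -> 525 = 3 * 5^2 * 7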

  3. VLSI research

    NASA Astrophysics Data System (ADS)

    Brodersen, R. W.

    1984-04-01

    A scaled version of the RISC II chip has been fabricated and tested; these new chips have a cycle time that would outperform a VAX 11/780 by about a factor of two on compiled integer C programs. The architectural work on a RISC chip designed for a Smalltalk implementation has been completed. This chip, called SOAR (Smalltalk On A RISC), should run programs 4-15 times faster than the Xerox 1100 (Dolphin), a TTL minicomputer, and about as fast as the Xerox 1132 (Dorado), a $100,000 ECL minicomputer. The 1983 VLSI tools tape has been converted for use under the latest UNIX release (4.2). The Magic (formerly called Caddy) layout system will be a unified set of highly automated tools that cover all aspects of the layout process, including stretching, compaction, tiling, and routing. A multiple-window package and design rule checker for this system have just been completed, and compaction and stretching are partially implemented. New slope-based timing models for the Crystal timing analyzer are now fully implemented and in regular use. In an accuracy test using a dozen critical paths from the RISC II processor and cache chips, Crystal's estimates were within 5-10% of SPICE's, while being a factor of 10,000 faster.

  4. The Effects of Different Training Backgrounds on VO2 Responses to All-Out and Supramaximal Constant-Velocity Running Bouts

    PubMed Central

    de Aguiar, Rafael Alves; Lisbôa, Felipe Domingos; Turnes, Tiago; Cruz, Rogério Santos de Oliveira; Caputo, Fabrizio

    2015-01-01

    To investigate the impact of different training backgrounds on pulmonary oxygen uptake (V̇O2) responses during all-out and supramaximal constant-velocity running exercises, nine sprinters (SPRs) and eight endurance runners (ENDs) performed an incremental test for maximal aerobic velocity (MAV) assessment and two supramaximal running exercises (a 1-min all-out test and constant-velocity exercise). The V̇O2 responses were continuously determined during the tests (K4b2, Cosmed, Italy). A mono-exponential function was used to describe the V̇O2 onset kinetics during the constant-velocity test at 110% MAV, while for the 1-min all-out test the peak V̇O2 (V̇O2peak), the time to achieve it (tV̇O2peak), and the V̇O2 decrease at the end of the test were determined to characterize the V̇O2 response. During constant-velocity exercise, ENDs had faster V̇O2 kinetics than SPRs (12.7 ± 3.0 vs. 19.3 ± 5.6 s; p < 0.001). During the 1-min all-out test, ENDs presented a slower tV̇O2peak than SPRs (40.6 ± 6.8 and 28.8 ± 6.4 s, respectively; p = 0.002) and a similar V̇O2peak relative to V̇O2max (88 ± 8 and 83 ± 6%, respectively; p = 0.157). Finally, SPRs were the only group that presented a V̇O2 decrease in the last half of the test (-1.8 ± 2.3 and 3.5 ± 2.3 ml.kg-1.min-1 for ENDs and SPRs, respectively; p < 0.001). In summary, SPRs have a faster V̇O2 response when maximum intensity is required, and a high maximum intensity during all-out running exercise seems to lead to a greater decrease in V̇O2 in the last part of the exercise. PMID:26252001
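
    As an illustration of the fitting step, the Python sketch below fits the standard mono-exponential on-kinetics model to synthetic breath-by-breath data with scipy; the parameter values and data are hypothetical, not the study's.

      import numpy as np
      from scipy.optimize import curve_fit

      def vo2_mono_exp(t, baseline, amplitude, delay, tau):
          # Standard mono-exponential on-kinetics model:
          # VO2(t) = baseline + amplitude * (1 - exp(-(t - delay) / tau)), t >= delay.
          t = np.asarray(t, dtype=float)
          rise = 1.0 - np.exp(-np.clip(t - delay, 0.0, None) / tau)
          return baseline + amplitude * rise

      # Hypothetical data (s, ml/kg/min); tau is the kinetics parameter the
      # study compares between sprinters and endurance runners.
      t = np.arange(0, 120, 2.0)
      vo2 = vo2_mono_exp(t, 10, 40, 12, 15) + np.random.normal(0, 1.0, t.size)
      (baseline, amplitude, delay, tau), _ = curve_fit(
          vo2_mono_exp, t, vo2, p0=(8, 35, 10, 20))
      print(f"tau = {tau:.1f} s")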

  5. Experimental Evidence for Fast Lithium Diffusion and Isotope Fractionation in Water-bearing Rhyolitic Melts at Magmatic Conditions

    NASA Astrophysics Data System (ADS)

    Cichy, S. B.; Till, C. B.; Roggensack, K.; Hervig, R. L.; Clarke, A. B.

    2015-12-01

    The aim of this work is to extend the existing database of experimentally-determined lithium diffusion coefficients to more natural cases of water-bearing melts at the pressure-temperature range of the upper crust. In particular, we are investigating Li intra-melt and melt-vapor diffusion and Li isotope fractionation, which have the potential to record short-lived magmatic processes (seconds to hours) in the shallow crust, especially during decompression-induced magma degassing. Hydrated intra-melt Li diffusion-couple experiments on Los Posos rhyolite glass [1] were performed in a piston cylinder at 300 MPa and 1050 °C. The polished interfaces between the diffusion couples were marked by addition of Pt powder for post-run detection. Secondary ion mass spectrometry analyses indicate that lithium diffuses extremely fast in the presence of water. Re-equilibration of a hydrated ~2.5 mm long diffusion-couple experiment was observed during the heating period from room temperature to the final temperature of 1050 °C at a rate of ~32 °C/min. Fractionation of ~40‰ δ7Li was also detected in this zero-time experiment. The 0.5h and 3h runs show progressively higher degrees of re-equilibration, while the isotope fractionation becomes imperceptible. Li contamination was observed in some experiments when flakes filed off Pt tubing were used to mark the diffusion couple boundary, while the use of high purity Pt powder produced better results and allowed easier detection of the diffusion-couple boundary. The preliminary lithium isotope fractionation results (δ7Li vs. distance) support findings from [2] that 6Li diffuses substantially faster than 7Li. Further experimental sets are in progress, including lower run temperatures (e.g. 900 °C), faster heating procedure (~100 °C/min), shorter run durations and the extension to mafic systems. [1] Stanton (1990) Ph.D. thesis, Arizona State Univ., [2] Richter et al. (2003) GCA 67, 3905-3923.

  6. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    NASA Astrophysics Data System (ADS)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid, trying to connect four of their own discs vertically, horizontally, or diagonally. For a computer to play Connect Four properly, like a human, it requires artificial intelligence (AI). Many AI algorithms can be implemented for Connect Four, but it is unknown which are suitable: a suitable algorithm chooses optimal moves and executes quickly enough at sufficiently deep search depths. In this research, standard alpha-beta (AB) pruning and MTD(f) are analyzed and compared on a Connect Four prototype in terms of optimality (win percentage) and speed (execution time and number of leaf nodes). Experiments were carried out in computer-versus-computer mode under 12 different conditions, varying the search depth (5 through 10) and which player moves first. Across the experiments, MTD(f) won 45.83% of games, lost 37.5%, and drew 16.67%. In the experiments with search depth 8, MTD(f) ran 35.19% faster and evaluated 56.27% fewer leaf nodes than AB pruning. The results of this research are that MTD(f) is as optimal as AB pruning on the Connect Four prototype, but MTD(f) is on average faster and evaluates fewer leaf nodes than AB pruning. The execution time of MTD(f) is not slow, and is much faster than AB pruning at sufficiently deep search depths.
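
    For reference, the MTD(f) driver is short enough to sketch in full: it brackets the minimax value with a sequence of zero-window alpha-beta searches. The sketch below is a generic formulation, with a toy game tree standing in for Connect Four and without the transposition table a production version would add.

      INF = float("inf")

      # Toy game tree: internal nodes are tuples of children, leaves are scores.
      TREE = ((3, 12, 8), (2, 4, 6), (14, 5, 2))

      def is_terminal(state): return not isinstance(state, tuple)
      def children(state): return state
      def evaluate(state): return state

      def alpha_beta(state, alpha, beta, depth, maximizing=True):
          # Fail-soft alpha-beta; a production version adds a transposition
          # table so MTD(f)'s repeated zero-window passes reuse earlier work.
          if depth == 0 or is_terminal(state):
              return evaluate(state)
          best = -INF if maximizing else INF
          for child in children(state):
              score = alpha_beta(child, alpha, beta, depth - 1, not maximizing)
              if maximizing:
                  best = max(best, score)
                  alpha = max(alpha, score)
              else:
                  best = min(best, score)
                  beta = min(beta, score)
              if alpha >= beta:
                  break
          return best

      def mtdf(state, first_guess, depth):
          # MTD(f): converge on the minimax value via zero-window searches
          # (Plaat et al.); with integer evaluations the loop terminates.
          g, lower, upper = first_guess, -INF, INF
          while lower < upper:
              beta = lower + 1 if g == lower else g
              g = alpha_beta(state, beta - 1, beta, depth)
              if g < beta:
                  upper = g
              else:
                  lower = g
          return g

      print(mtdf(TREE, 0, 3))  # -> 3, the minimax value of the toy tree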

  7. ParDRe: faster parallel duplicated reads removal tool for sequencing studies.

    PubMed

    González-Domínguez, Jorge; Schmidt, Bertil

    2016-05-15

    Current next generation sequencing technologies often generate duplicated or near-duplicated reads that (depending on the application scenario) do not provide any interesting biological information but can increase memory requirements and computational time of downstream analysis. In this work we present ParDRe, a de novo parallel tool to remove duplicated and near-duplicated reads through the clustering of single-end or paired-end sequences from fasta or fastq files. It uses a novel bitwise approach to compare the suffixes of DNA strings and employs hybrid MPI/multithreading to reduce runtime on multicore systems. We show that ParDRe is up to 27.29 times faster than Fulcrum (a representative state-of-the-art tool) on a platform with two 8-core Sandy Bridge processors. Source code in C++ and MPI for Linux systems, together with a reference manual, is available at https://sourceforge.net/projects/pardre/ (contact: jgonzalezd@udc.es).
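
    The sketch below illustrates the general flavor of prefix-bucketed duplicate removal with two-bit packed DNA codes; it handles exact duplicates of equal-length single-end reads only, whereas ParDRe itself also clusters near-duplicates and parallelizes the buckets across MPI processes and threads.

      from collections import defaultdict

      ENCODE = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pack(seq):
          # Pack a DNA string into an integer, two bits per base, so that
          # comparisons are cheap integer operations rather than string compares.
          value = 0
          for base in seq:
              value = (value << 2) | ENCODE[base]
          return value

      def remove_duplicates(reads, prefix_len=16):
          # Bucket reads by a packed prefix, then keep one representative per
          # identical packed suffix inside each bucket.
          buckets = defaultdict(dict)
          for read in reads:
              buckets[pack(read[:prefix_len])].setdefault(pack(read[prefix_len:]), read)
          return [read for bucket in buckets.values() for read in bucket.values()]

      reads = ["ACGTACGTACGTACGTAAAA", "ACGTACGTACGTACGTAAAA", "ACGTACGTACGTACGTTTTT"]
      print(remove_duplicates(reads))  # two unique reads survive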

  8. Do Running Kinematic Characteristics Change over a Typical HIIT for Endurance Runners?

    PubMed

    García-Pinillos, Felipe; Soto-Hermoso, Víctor M; Latorre-Román, Pedro Á

    2016-10-01

    García-Pinillos, F, Soto-Hermoso, VM, and Latorre-Román, PÁ. Do running kinematic characteristics change over a typical HIIT for endurance runners? J Strength Cond Res 30(10): 2907-2917, 2016-The purpose of this study was to describe the kinematic changes that occur during a common high-intensity intermittent training (HIIT) session for endurance runners. Twenty-eight male endurance runners participated in this study. A high-speed camera was used to measure sagittal-plane kinematics during the first and last runs of a HIIT session (4 × 3 × 400 m). The dependent variables were spatial-temporal variables, joint angles during support and swing, and foot strike pattern. Physiological variables, rate of perceived exertion, and athletic performance were also recorded. No significant changes (p ≥ 0.05) in kinematic variables were found during the HIIT session. Two cluster analyses were performed: one according to average running pace (faster vs. slower) and one according to the exhaustion level reached (exhausted group vs. nonexhausted group [NEG]). At the first run, no significant differences were found between groups. As for the changes induced by the running protocol, significant differences (p ≤ 0.05) were found between faster and slower athletes at toe-off in θhip and θknee, whereas some changes were found in the NEG in θhip at toe-off (+4.3°) and in θknee during swing (-5.2°). The results show that a common HIIT session for endurance runners did not consistently or substantially perturb the running kinematics of trained male runners. Additionally, although some differences between groups were found, neither athletic performance nor the exhaustion level reached seems to be determinant in the kinematic response during a HIIT, at least for this group of moderately trained endurance runners.

  9. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system, we simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross sections. Double-precision floating-point format was used for accuracy. Doses to the rectum, prostate, bladder, and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX, and these speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was thus developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code, and scalability was confirmed on the dual-GPU system.

  10. Risk perception influences athletic pacing strategy.

    PubMed

    Micklewright, Dominic; Parry, David; Robinson, Tracy; Deacon, Greg; Renfree, Andrew; St Clair Gibson, Alan; Matthews, William J

    2015-05-01

    The objective of this study is to examine risk taking and risk perception associations with perceived exertion, pacing, and performance in athletes. Two experiments were conducted in which risk perception was assessed using the domain-specific risk taking (DOSPERT) scale in 20 novice cyclists (experiment 1) and 32 experienced ultramarathon runners (experiment 2). In experiment 1, participants predicted their pace and then performed a 5-km maximum effort cycling time trial on a calibrated Kingcycle mounted bicycle. Split times and perceived exertion were recorded every kilometer. In experiment 2, each participant predicted their split times before running a 100-km ultramarathon. Split times and perceived exertion were recorded at seven checkpoints. In both experiments, higher and lower risk perception groups were created using median split of DOSPERT scores. In experiment 1, pace during the first kilometer was faster among lower risk perceivers compared with higher risk perceivers (t(18) = 2.0, P = 0.03) and faster among higher risk takers compared with lower risk takers (t(18) = 2.2, P = 0.02). Actual pace was slower than predicted pace during the first kilometer in both the higher risk perceivers (t(9) = -4.2, P = 0.001) and lower risk perceivers (t(9) = -1.8, P = 0.049). In experiment 2, pace during the first 36 km was faster among lower risk perceivers compared with higher risk perceivers (t(16) = 2.0, P = 0.03). Irrespective of risk perception group, actual pace was slower than predicted pace during the first 18 km (t(16) = 8.9, P < 0.001) and from 18 to 36 km (t(16) = 4.0, P < 0.001). In both experiments, there was no difference in performance between higher and lower risk perception groups. Initial pace is associated with an individual's perception of risk, with low perceptions of risk being associated with a faster starting pace. Large differences between predicted and actual pace suggest that the performance template lacks accuracy, perhaps indicating greater reliance on momentary pacing decisions rather than preplanned strategy.

  11. A rapid estimation of near field tsunami run-up

    USGS Publications Warehouse

    Riqueime, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jamie

    2015-01-01

    Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task because of the time it takes to construct a tsunami model using real-time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunamis a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution that was specifically designed for subduction zones with a well-defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for the 1992 Mw 7.7 Nicaragua earthquake, the 2001 Mw 8.4 Perú earthquake, the 2003 Mw 8.3 Hokkaido earthquake, the 2007 Mw 8.1 Perú earthquake, the 2010 Mw 8.8 Maule earthquake, the 2011 Mw 9.0 Tohoku earthquake, and the recent 2014 Mw 8.2 Iquique earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with peaks of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. Considering recent advances in the analysis of real-time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Such calculations will thus provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources.

  12. Time Evolution of Sublingual Microcirculatory Changes in Recreational Marathon Runners

    PubMed Central

    Arstikyte, Justina; Vaitkaitiene, Egle; Vaitkaitis, Dinas

    2017-01-01

    We aimed to evaluate changes in sublingual microcirculation induced by a marathon race. Thirteen healthy male controls and 13 male marathon runners volunteered for the study. We performed sublingual microcirculation measurements, using a Cytocam-IDF device (Braedius Medical, Huizen, Netherlands), and systemic hemodynamic measurements four times: 24 hours prior to the Kaunas Marathon (distance: 41.2 km), directly after finishing the marathon, 24 hours after the marathon, and one week after the marathon. The marathon runners exhibited a higher functional capillary density (FCD) and total vascular density of small vessels at the first visit compared with the controls. Overall, we did not find any changes in the sublingual microcirculation of the marathon runners at any of the other visits. However, compared with the subgroup whose FCD increased, the subgroup of runners whose FCD decreased had a shorter running time (190.37 ± 30.2 versus 221.80 ± 23.4 min, p = 0.045), ingested less fluid (907 ± 615 versus 1950 ± 488 mL, p = 0.007) during the race, and lost more weight (−2.4 ± 1.3 versus −1.0 ± 0.8 kg, p = 0.041). Recreational marathon running is not associated with an alteration of sublingual microcirculation. However, faster running and dehydration may be crucial for further impairing microcirculation. PMID:28828386

  13. The Effect of Driver Rise-Time on Pinch Current and its Impact on Plasma Focus Performance and Neutron Yield

    NASA Astrophysics Data System (ADS)

    Sears, Jason; Schmidt, Andrea; Link, Anthony; Welch, Dale

    2016-10-01

    Experiments have suggested that dense plasma focus (DPF) neutron yield increases with faster drivers [Decker NIMP 1986]. Using the particle-in-cell code LSP [Schmidt PRL 2012], we reproduce this trend in a kJ DPF [Ellsworth 2014], and demonstrate how driver rise time is coupled to neutron output. We implement a 2-D model of the plasma focus including self-consistent circuit-driven boundary conditions. Driver capacitance and voltage are varied to modify the current rise time, and anode length is adjusted so that run-in coincides with the peak current. We observe during run down that magnetohydrodynamic (MHD) instabilities of the sheath shed blobs of plasma that remain in the inter-electrode gap during run in. This trailing plasma later acts as a low-inductance restrike path that shunts current from the pinch during maximum compression. While the MHD growth rate increases slightly with driver speed, the shorter anode of the fast driver allows fewer e-foldings and hence reduces the trailing mass between electrodes. As a result, the fast driver postpones parasitic restrikes and maintains peak current through the pinch during maximum compression. The fast driver pinch therefore achieves best simultaneity between its ion beam and peak target density, which maximizes neutron production. Prepared by LLNL under Contract DE-AC52-07NA27344.

  14. Effect of contrast water therapy duration on recovery of running performance.

    PubMed

    Versey, Nathan G; Halson, Shona L; Dawson, Brian T

    2012-06-01

    To investigate whether contrast water therapy (CWT) assists acute recovery from high-intensity running and whether a dose-response relationship exists, ten trained male runners completed 4 trials, each commencing with a 3000-m time trial, followed by 8 × 400-m intervals with 1 min of recovery. Ten minutes postexercise, participants performed 1 of 4 recovery protocols: CWT, by alternating 1 min hot (38°C) and 1 min cold (15°C) for 6 (CWT6), 12 (CWT12), or 18 min (CWT18), or a seated rest control trial. The 3000-m time trial was repeated 2 h later. 3000-m performance slowed from 632 ± 4 to 647 ± 4 s in control, 631 ± 4 to 642 ± 4 s in CWT6, 633 ± 4 to 648 ± 4 s in CWT12, and 631 ± 4 to 647 ± 4 s in CWT18. Following CWT6, performance (smallest worthwhile change of 0.3%) was substantially faster than control (87% probability, 0.8 ± 0.8% mean ± 90% confidence limit); however, there was no effect for CWT12 (34%, 0.0 ± 1.0%) or CWT18 (34%, -0.1 ± 0.8%). There were no substantial differences between conditions in exercise heart rates, or in postexercise calf and thigh girths. The algometer thigh pain threshold during CWT12 was higher at all time points compared with control. Subjective measures of thermal sensation and muscle soreness were lower in all CWT conditions at some post-water-immersion time points compared with control; however, there were no consistent differences in whole body fatigue following CWT. Contrast water therapy for 6 min assisted acute recovery from high-intensity running; however, CWT duration did not have a dose-response effect on recovery of running performance.

  15. Effect of the Protein Corona on Antibody-Antigen Binding in Nanoparticle Sandwich Immunoassays.

    PubMed

    de Puig, Helena; Bosch, Irene; Carré-Camps, Marc; Hamad-Schifferli, Kimberly

    2017-01-18

    We investigated the effect of the protein corona on the function of nanoparticle (NP) antibody (Ab) conjugates in dipstick sandwich immunoassays. Ab specific for Zika virus nonstructural protein 1 (NS1) were conjugated to gold NPs, and another anti-NS1 Ab was immobilized onto the nitrocellulose membrane. Sandwich immunoassay formation was influenced by whether the strip was run in corona-forming conditions, i.e., in human serum. Strips run in buffer or pure solutions of bovine serum albumin exhibited false positives, but those run in human serum did not. Serum pretreatment of the nitrocellulose also eliminated false positives. Corona formation around the NP-Ab in serum was faster than the immunoassay time scale. Langmuir binding analysis determined how the immobilized Ab affinity for the NP-Ab/NS1 was impacted by corona formation conditions, quantified as an effective dissociation constant, K_D^eff. Results show that corona formation mediates the specificity and sensitivity of the antibody-antigen interaction of Zika biomarkers in immunoassays, and plays a critical but beneficial role.
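
    The abstract does not give the fitted expression; a minimal sketch, assuming the standard single-site Langmuir isotherm (this specific functional form is our assumption, not stated in the abstract), is

      % Fraction of immobilized antibody occupied by the NP-Ab/NS1 complex
      % at antigen concentration [NS1]; fitting measured signal against
      % concentration under each corona condition yields K_D^eff.
      \theta = \frac{[\mathrm{NS1}]}{K_D^{\mathrm{eff}} + [\mathrm{NS1}]}

    with a smaller K_D^eff indicating tighter effective binding.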

  16. User interface user's guide for HYPGEN

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1992-01-01

    The user interface (UI) of HYPGEN is developed using Panel Library to shorten the learning curve for new users and provide easier ways to run HYPGEN for casual users as well as for advanced users. Menus, buttons, sliders, and type-in fields are used extensively in UI to allow users to point and click with a mouse to choose various available options or to change values of parameters. On-line help is provided to give users information on using UI without consulting the manual. Default values are set for most parameters and boundary conditions are determined by UI to further reduce the effort needed to run HYPGEN; however, users are free to make any changes and save it in a file for later use. A hook to PLOT3D is built in to allow graphics manipulation. The viewpoint and min/max box for PLOT3D windows are computed by UI and saved in a PLOT3D journal file. For large grids which take a long time to generate on workstations, the grid generator (HYPGEN) can be run on faster computers such as Crays, while UI stays at the workstation.

  17. An effective hybrid firefly algorithm with harmony search for global numerical optimization.

    PubMed

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA converges faster than either HS or FA. A top-fireflies scheme is also introduced to reduce running time, and HS is utilized to mutate between fireflies when updating them. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA outperforms the standard FA and eight other optimization methods.
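
    A compact Python sketch of an HS/FA-style hybrid is given below; the update rules and parameter names are illustrative reconstructions of the ideas named in the abstract (attraction toward only the top fireflies, plus a harmony-search mutation step), not the authors' exact formulation.

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):
          return float(np.sum(x ** 2))  # benchmark objective

      def hs_fa(obj, dim=5, n=20, top=5, iters=200, beta0=1.0, gamma=1.0,
                alpha=0.2, hmcr=0.9, par=0.3):
          pop = rng.uniform(-5, 5, (n, dim))
          for _ in range(iters):
              fitness = np.array([obj(x) for x in pop])
              order = np.argsort(fitness)
              for i in range(n):
                  for j in order[:top]:      # move toward the top fireflies only
                      if fitness[j] < fitness[i]:
                          r2 = np.sum((pop[i] - pop[j]) ** 2)
                          pop[i] += beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i])
                  if rng.random() < hmcr:    # harmony memory consideration
                      pop[i] = pop[rng.integers(n)].copy()
                      if rng.random() < par: # pitch adjustment
                          pop[i] += alpha * rng.normal(size=dim)
                  else:                      # random restart of this firefly
                      pop[i] = rng.uniform(-5, 5, dim)
          best = min(pop, key=obj)
          return best, obj(best)

      print(hs_fa(sphere))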

  18. Directing Attention Externally Enhances Agility Performance: A Qualitative and Quantitative Analysis of the Efficacy of Using Verbal Instructions to Focus Attention

    PubMed Central

    Porter, Jared M.; Nolan, Russell P.; Ostrowski, Erik J.; Wulf, Gabriele

    2010-01-01

    The primary purpose of this study was to investigate whether focusing attention externally produced faster movement times than instructions that focused attention internally, or a control set of instructions that did not explicitly focus attention, when performing an agility task. A second purpose was to measure participants' focus of attention during practice by use of a questionnaire. Participants (N = 20) completed 15 trials of an agility "L" run following instructions designed to induce an external (EXT) or internal (INT) attentional focus, or a control (CON) set of instructions inducing no specific focus of attention. Analysis revealed that when participants followed the EXT instructions they had significantly faster movement times than when they followed the INT and CON sets of instructions; consistent with previous research, the INT and CON movement times were not significantly different from each other. Qualitative data showed that when participants were in the external condition they focused externally 67% of the time, when they were in the internal condition they focused internally 76% of the time, and when they were in the control condition they did not use an internal or external focus of attention 77% of the time. Qualitative data also revealed that participants in the EXT, INT, and CON conditions switched their focus of attention at a frequency of 27, 35, and 51%, respectively. PMID:21833271

  19. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
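
    As a point of reference, a simplified synchronous island evolution strategy with ring migration can be sketched in a few lines of Python; the paper's contribution, the asynchronous MPI coordination that avoids stalling on slow nodes, is deliberately not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      def island_es(obj, dim=10, islands=4, mu=5, lam=20, migrate_every=10,
                    generations=100, sigma=0.3):
          # Simplified *synchronous* island (mu + lambda)-ES with ring
          # migration; piES runs the islands asynchronously over MPI.
          pops = [rng.normal(0, 1, (mu, dim)) for _ in range(islands)]
          for gen in range(generations):
              for i in range(islands):
                  parents = pops[i]
                  kids = (parents[rng.integers(mu, size=lam)]
                          + sigma * rng.normal(size=(lam, dim)))
                  both = np.vstack([parents, kids])
                  scores = np.array([obj(x) for x in both])
                  pops[i] = both[np.argsort(scores)[:mu]]   # keep the mu best
              if gen % migrate_every == 0:
                  for i in range(islands):                  # ring migration
                      pops[(i + 1) % islands][-1] = pops[i][0]
          best = min((x for p in pops for x in p), key=obj)
          return best, obj(best)

      print(island_es(lambda x: float(np.sum(x ** 2)))[1])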

  20. Running a Marathon Induces Changes in Adipokine Levels and in Markers of Cartilage Degradation – Novel Role for Resistin

    PubMed Central

    Vuolteenaho, Katriina; Leppänen, Tiina; Kekkonen, Riina; Korpela, Riitta; Moilanen, Eeva

    2014-01-01

    Running a marathon causes strenuous joint loading and increased energy expenditure. Adipokines regulate energy metabolism, but recent studies have indicated that they also play a role in cartilage degradation in arthritis. Our aim was to investigate the effects of running a marathon on the levels of adipokines and on indices of cartilage metabolism. Blood samples were obtained from 46 male marathoners before and after a marathon run. We measured levels of matrix metalloproteinase-3 (MMP-3), cartilage oligomeric protein (COMP), and chitinase 3-like protein 1 (YKL-40) as biomarkers of cartilage turnover and/or damage, and plasma concentrations of the adipokines adiponectin, leptin, and resistin. Mean marathon time was 3:30:46 ± 0:02:46 (h:min:sec). The exertion more than doubled MMP-3 levels, and this change correlated negatively with marathon time (r = -0.448, p = 0.002). YKL-40 levels increased by 56%, and the effect on COMP release was variable. Running a marathon increased the levels of resistin and adiponectin, while leptin levels remained unchanged. The marathon-induced changes in resistin levels were positively associated with the changes in MMP-3 (r = 0.382, p = 0.009) and YKL-40 (r = 0.588, p < 0.001), and pre-marathon resistin levels correlated positively with the marathon-induced change in YKL-40 (r = 0.386, p = 0.008). The present results show the impact of running a marathon, and possibly of load frequency, on cartilage metabolism: the faster the marathon was run, the greater the increase in MMP-3 levels. Further, the results introduce the pro-inflammatory adipocytokine resistin as a novel factor, which increases during a marathon race and is associated with markers of cartilage degradation. PMID:25333960

  1. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared-memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node: several cores sharing the same memory, plus a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Energy conservation is critical especially for long simulations, due to the phenomenon known as "energy drift", in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision, and double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and drift by less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy, and atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
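
    The lookup-table trade-off is easy to demonstrate: below is a Python sketch comparing linear and three-point quadratic interpolation of a tabulated Lennard-Jones-style pair potential. The table size and test point are arbitrary; MOIL's actual tables and potentials are not reproduced.

      import numpy as np

      def lj(r):
          # Lennard-Jones-like pair potential (reduced units).
          return 4.0 * (r ** -12 - r ** -6)

      r_grid = np.linspace(0.9, 3.0, 64)
      table = lj(r_grid)
      dr = r_grid[1] - r_grid[0]

      def lookup_linear(r):
          i = int((r - r_grid[0]) / dr)
          t = (r - r_grid[i]) / dr
          return (1 - t) * table[i] + t * table[i + 1]

      def lookup_quadratic(r):
          i = min(max(int((r - r_grid[0]) / dr), 1), len(table) - 2)
          t = (r - r_grid[i]) / dr
          # Three-point Lagrange quadratic through neighbors i-1, i, i+1:
          # reaches a given accuracy with fewer grid points than linear.
          return (0.5 * t * (t - 1) * table[i - 1]
                  + (1 - t * t) * table[i]
                  + 0.5 * t * (t + 1) * table[i + 1])

      r = 1.2345
      print(abs(lookup_linear(r) - lj(r)), abs(lookup_quadratic(r) - lj(r)))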

  2. The Influence of Mid-Event Deception on Psychophysiological Status and Pacing Can Persist across Consecutive Disciplines and Enhance Self-paced Multi-modal Endurance Performance

    PubMed Central

    Taylor, Daniel; Smith, Mark F.

    2017-01-01

    Purpose: To examine the effects of deceptively aggressive bike pacing on performance, pacing, and associated physiological and perceptual responses during simulated sprint-distance triathlon. Methods: Ten non-elite, competitive male triathletes completed three simulated sprint-distance triathlons (0.75 km swim, 500 kJ bike, 5 km run), the first of which established personal-best "baseline" performance (BL). During the remaining two trials athletes maintained a cycling power output 5% greater than BL before completing the run as quickly as possible. However, participants were informed of this aggressive cycling strategy before and during only one of the two trials (HON). Prior to the alternate trial (DEC), participants were misinformed that cycling power output would equal that of BL, with on-screen feedback manipulated to reinforce this deception. Results: Compared to BL, a significantly faster run performance was observed following DEC cycling (p < 0.05) but not following HON cycling (1348 ± 140 vs. 1333 ± 129 s and 1350 ± 135 s for BL, DEC, and HON, respectively). As such, magnitude-based inferences suggest that HON running was more likely to be slower than faster compared with BL, and that DEC running was probably faster than both BL and HON. Despite a trend for overall triathlon performance to be quicker during DEC (4339 ± 395 s) than HON (4356 ± 384 s), the only significant and almost certainly meaningful differences were between each of these trials and BL (4465 ± 420 s; p < 0.05). Generally, physiological and perceptual strain increased with higher cycling intensities, with little, if any, substantial difference in physiological and perceptual response during each triathlon run. Conclusions: The present study is the first to show that mid-event pace deception can have a practically meaningful effect on multi-modal endurance performance, though the relative importance of different psychophysiological and emotional responses remains unclear. Whilst our findings support the view that some form of anticipatory "template" may be used by athletes to interpret levels of psychophysiological and emotional strain, and to regulate exercise intensity accordingly, they also suggest that individual constructs such as RPE and affect may be more loosely tied to pacing than previously suggested. PMID:28174540

  3. Running faster causes disaster: trade-offs between speed, manoeuvrability and motor control when running around corners in northern quolls (Dasyurus hallucatus).

    PubMed

    Wynn, Melissa L; Clemente, Christofer; Nasir, Ami Fadhillah Amir Abdul; Wilson, Robbie S

    2015-02-01

    Movement speed is fundamental to all animal behaviour, yet no general framework exists for understanding why animals move at the speeds they do. Even during fitness-defining behaviours like running away from predators, an animal should select a speed that balances the benefits of high speed against the increased probability of mistakes. In this study, we explored this idea by quantifying trade-offs between speed, manoeuvrability and motor control in wild northern quolls (Dasyurus hallucatus), a medium-sized carnivorous marsupial native to northern Australia. First, we quantified how running speed affected the probability of crashes when rounding corners of 45, 90 and 135 deg. We found that the faster an individual approached a turn, the higher the probability that it would crash, and these risks were greater when negotiating tighter turns. To avoid crashes, quolls modulated their running speed when they moved through turns of varying angles. Average speed for quolls when sprinting along a straight path was around 4.5 m s(-1), but this decreased linearly to around 1.5 m s(-1) when running through 135 deg turns. Finally, we explored how an individual's morphology affects its manoeuvrability, and found that individuals with larger relative foot sizes were more manoeuvrable than those with smaller relative foot sizes. Thus, movement speed, even during extreme situations like escaping predation, should be based on a compromise between high speed, manoeuvrability and motor control. We advocate that optimal, rather than maximal, performance capabilities underlie fitness-defining behaviours such as escaping predators and capturing prey.

  4. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, that hardware improvements have enhanced performance by less than a factor of approximately 3, that timing studies are very problem dependent, and that MCNP4A runs about as fast as MCNP4.

  5. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
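
    For contrast with the vectorized approach, the standard sequential Thomas algorithm is sketched below in Python; its forward and backward recurrences are exactly the data dependencies that cyclic reduction eliminates, at the cost of extra arithmetic, to expose vector parallelism on machines like the Cyber 205.

      import numpy as np

      def thomas(a, b, c, d):
          # Sequential solver for the tridiagonal system
          #   a[i] x[i-1] + b[i] x[i] + c[i] x[i+1] = d[i]  (a[0] = c[-1] = 0).
          # Each step depends on the previous one, so the recurrence cannot
          # be vectorized; cyclic reduction removes this dependency.
          n = len(d)
          b, d = b.astype(float).copy(), d.astype(float).copy()
          for i in range(1, n):                 # forward elimination
              w = a[i] / b[i - 1]
              b[i] -= w * c[i - 1]
              d[i] -= w * d[i - 1]
          x = np.empty(n)
          x[-1] = d[-1] / b[-1]
          for i in range(n - 2, -1, -1):        # back substitution
              x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
          return x

      # Symmetric positive definite test system: the -1, 2, -1 Laplacian.
      n = 6
      a = np.r_[0.0, -np.ones(n - 1)]
      c = np.r_[-np.ones(n - 1), 0.0]
      print(thomas(a, 2.0 * np.ones(n), c, np.ones(n)))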

  6. Finding community structure in very large networks

    NASA Astrophysics Data System (ADS)

    Clauset, Aaron; Newman, M. E. J.; Moore, Cristopher

    2004-12-01

    The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(md log n), where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m ~ n and d ~ log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web site of a large on-line retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 × 10^6 edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.
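
    This greedy agglomeration is the algorithm implemented in NetworkX as greedy_modularity_communities, so a minimal usage example (on a small built-in graph rather than the retailer network) looks like:

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Clauset-Newman-Moore greedy modularity maximization, run here on
      # Zachary's karate club graph instead of the co-purchasing network.
      G = nx.karate_club_graph()
      communities = greedy_modularity_communities(G)
      for i, community in enumerate(communities):
          print(f"community {i}: {sorted(community)}")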

  7. Adaptive mesh fluid simulations on GPU

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Abel, Tom; Kaehler, Ralf

    2010-10-01

    We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
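
    The time-integration component mentioned above is compact enough to sketch: a second-order TVD (strong-stability-preserving) Runge-Kutta step over a method-of-lines operator. The toy upwind-advection operator below stands in for the paper's piecewise-linear reconstruction and HLL Riemann solver.

      import numpy as np

      def ssp_rk2_step(u, dt, L):
          # Second-order TVD/SSP Runge-Kutta: an Euler predictor followed
          # by an averaging corrector.
          u1 = u + dt * L(u)
          return 0.5 * (u + u1 + dt * L(u1))

      def upwind_advection(u, dx=0.01, speed=1.0):
          # Toy spatial operator: periodic linear advection, first-order
          # upwind; a stand-in for reconstruction + Riemann solve.
          return -speed * (u - np.roll(u, 1)) / dx

      x = np.linspace(0, 1, 100, endpoint=False)
      u = np.exp(-100 * (x - 0.5) ** 2)
      for _ in range(50):                 # CFL = speed * dt / dx = 0.5
          u = ssp_rk2_step(u, dt=0.005, L=upwind_advection)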

  8. Climbing favours the tripod gait over alternative faster insect gaits

    NASA Astrophysics Data System (ADS)

    Ramdya, Pavan; Thandiackal, Robin; Cherney, Raphael; Asselborn, Thibault; Benton, Richard; Ijspeert, Auke Jan; Floreano, Dario

    2017-02-01

    To escape danger or catch prey, running vertebrates rely on dynamic gaits with minimal ground contact. By contrast, most insects use a tripod gait that maintains at least three legs on the ground at any given time. One prevailing hypothesis for this difference in fast locomotor strategies is that tripod locomotion allows insects to rapidly navigate three-dimensional terrain. To test this, we computationally discovered fast locomotor gaits for a model based on Drosophila melanogaster. Indeed, the tripod gait emerges to the exclusion of many other possible gaits when optimizing fast upward climbing with leg adhesion. By contrast, novel two-legged bipod gaits are fastest on flat terrain without adhesion in the model and in a hexapod robot. Intriguingly, when adhesive leg structures in real Drosophila are covered, animals exhibit atypical bipod-like leg coordination. We propose that the requirement to climb vertical terrain may drive the prevalence of the tripod gait over faster alternative gaits with minimal ground contact.

  10. A faster running speed is associated with a greater body weight loss in 100-km ultra-marathoners.

    PubMed

    Knechtle, Beat; Knechtle, Patrizia; Wirth, Andrea; Alexander Rüst, Christoph; Rosemann, Thomas

    2012-01-01

    In 219 recreational male runners, we investigated changes in body mass, total body water, haematocrit, plasma sodium concentration ([Na(+)]), and urine specific gravity as well as fluid intake during a 100-km ultra-marathon. The athletes lost 1.9 kg (s = 1.4) of body mass, equal to 2.5% (s = 1.8) of body mass (P < 0.001), 0.7 kg (s = 1.0) of predicted skeletal muscle mass (P < 0.001), 0.2 kg (s = 1.3) of predicted fat mass (P < 0.05), and 0.9 L (s = 1.6) of predicted total body water (P < 0.001). Haematocrit decreased (P < 0.001), urine specific gravity (P < 0.001), plasma volume (P < 0.05), and plasma [Na(+)] (P < 0.05) all increased. Change in body mass was related to running speed (r = -0.16, P < 0.05), change in plasma volume was associated with change in plasma [Na(+)] (r = -0.28, P < 0.0001), and change in body mass was related to both change in plasma [Na(+)] (r = -0.36) and change in plasma volume (r = 0.31) (P < 0.0001). The athletes consumed 0.65 L (s = 0.27) fluid per hour. Fluid intake was related to both running speed (r = 0.42, P < 0.0001) and change in body mass (r = 0.23, P = 0.0006), but not post-race plasma [Na(+)] or change in plasma [Na(+)] (P > 0.05). In conclusion, faster runners lost more body mass, runners lost more body mass when they drank less fluid, and faster runners drank more fluid than slower runners.

  11. Performance Evaluation of the Sysmex CS-5100 Automated Coagulation Analyzer.

    PubMed

    Chen, Liming; Chen, Yu

    2015-01-01

    Coagulation testing is widely applied clinically, and laboratories increasingly demand automated coagulation analyzers with short turnaround times and high throughput. The purpose of this study was to evaluate the performance of the Sysmex CS-5100 automated coagulation analyzer for routine use in a clinical laboratory. The prothrombin time (PT), international normalized ratio (INR), activated partial thromboplastin time (APTT), fibrinogen (Fbg), and D-dimer were compared between the Sysmex CS-5100 and Sysmex CA-7000 analyzers, and imprecision, comparability, throughput, STAT function, and performance on abnormal samples were measured for each. The within-run and between-run coefficients of variation (CV) for the PT, APTT, INR, and D-dimer analyses were excellent in both the normal and pathologic ranges. Results from the Sysmex CS-5100 and Sysmex CA-7000 were highly correlated. The throughput of the Sysmex CS-5100 was faster than that of the Sysmex CA-7000, and total bilirubin and triglyceride concentrations caused no interference on the Sysmex CS-5100. We demonstrated that the Sysmex CS-5100 performs with satisfactory imprecision and is well suited for coagulation analysis in laboratories processing large sample numbers and icteric and lipemic samples.

  12. A three-stage experimental strategy to evaluate and validate an interplate IC50 format.

    PubMed

    Rodrigues, Daniel J; Lyons, Richard; Laflin, Philip; Pointon, Wayne; Kammonen, Juha

    2007-12-01

    The serial dilution of compounds to establish potency against target enzymes or receptors can at times be a rate-limiting step in project progression. We have investigated the possibility of running 50% inhibitory concentration experiments in an interplate format, with dose ranges constructed across plates. The advantages associated with this format include a faster reformatting time for the compounds while also increasing the number of doses that can be potentially generated. These two factors, in particular, would lend themselves to a higher-throughput and more timely testing of compounds, while also maximizing chances to capture fully developed dose-response curves. The key objective from this work was to establish a strategy to assess the feasibility of an interplate format to ensure that the quality of data generated would be equivalent to historical formats used. A three-stage approach was adopted to assess and validate running an assay in an interplate format, compared to an intraplate format. Although the three-stage strategy was tested with two different assay formats, it would be necessary to investigate the feasibility for other assay types. The recommendation is that the three-stage experimental strategy defined here is used to assess feasibility of other assay formats used.

  13. Using a virtual world for robot planning

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian

    2012-06-01

    We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.

  14. A Comparison of Mixed-Method Cooling Interventions on Preloaded Running Performance in the Heat.

    PubMed

    Stevens, Christopher J; Bennett, Kyle J M; Sculley, Dean V; Callister, Robin; Taylor, Lee; Dascombe, Ben J

    2017-03-01

    Stevens, CJ, Bennett, KJM, Sculley, DV, Callister, R, Taylor, L, and Dascombe, BJ. A comparison of mixed-method cooling interventions on preloaded running performance in the heat. J Strength Cond Res 31(3): 620-629, 2017-The purpose of this investigation was to assess the effect of combining practical methods to cool the body on endurance running performance and physiology in the heat. Eleven trained male runners completed 4 randomized, preloaded running time trials (20 minutes at 70% V̇O2max and a 3 km time trial) on a nonmotorized treadmill in the heat (33 °C). Trials consisted of precooling by combined cold-water immersion and ice slurry ingestion (PRE), midcooling by combined facial water spray and menthol mouth rinse (MID), a combination of all methods (ALL), and control (CON). Performance time was significantly faster in MID (13.7 ± 1.2 minutes; p < 0.01) and ALL (13.7 ± 1.4 minutes; p = 0.04) but not PRE (13.9 ± 1.4 minutes; p = 0.24) when compared with CON (14.2 ± 1.2 minutes). Precooling significantly reduced rectal temperature (initially by 0.5 ± 0.2 °C), mean skin temperature, heart rate, and sweat rate, and increased iEMG activity, whereas midcooling significantly increased expired air volume and respiratory exchange ratio compared with control. Significant decreases in forehead temperature, thermal sensation, and postexercise blood prolactin concentration were observed in all conditions compared with control. Performance was improved with midcooling, whereas precooling had little or no influence. Midcooling may have improved performance through an attenuated inhibitory psychophysiological and endocrine response to the heat.

  15. Improving Running Times for the Determination of Fractional Snow-Covered Area from Landsat TM/ETM+ via Utilization of the CUDA® Programming Paradigm

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Rittger, K.; Painter, T. H.; Selkowitz, D.; Mattmann, C. A.; Ramirez, P.

    2014-12-01

    As part of a JPL-USGS collaboration to expand distribution of essential climate variables (ECV) to include on-demand fractional snow cover, we describe our experience and implementation of a shift toward NVIDIA's CUDA® parallel computing platform and programming model. In particular, the on-demand aspect of this work involves faster processing and a reduction in overall running times for the determination of fractional snow-covered area (fSCA) from Landsat TM/ETM+. Our observations indicate that processing tasks associated with remote sensing, including the Snow Covered Area and Grain Size model (SCAG) applied to MODIS or Landsat TM/ETM+, are computationally intensive. We believe the shift to the CUDA programming paradigm significantly improves the ability to assess the outcomes of such activities more quickly, and we use the TMSCAG model to make this argument. We describe how we ingest a Landsat surface reflectance image (typically provided in HDF format) and perform spectral mixture analysis to produce land cover fractions including snow, vegetation, and rock/soil, while greatly reducing running times for such tasks. Within the scope of this work we first document the original workflow used to derive fSCA for Landsat TM and its primary shortcomings. We then introduce the logic and justification behind the switch to the CUDA paradigm for running single as well as batch jobs on the GPU in order to achieve parallel processing. Finally, we share lessons learned from consolidating a myriad of existing algorithms into a single code base in a single target language, as well as the benefits this ultimately provides scientists at the USGS.
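
    The core per-pixel operation of spectral mixture analysis can be sketched as a small nonnegative least-squares solve; the endmember spectra below are made-up placeholders, not TMSCAG's library, and the real pipeline runs this (or an equivalent unmixing) for millions of pixels, which is what the CUDA port parallelizes.

      import numpy as np
      from scipy.optimize import nnls

      # Per-pixel spectral mixture analysis: solve R ~ E @ f for nonnegative
      # fractions f of snow, vegetation, and rock/soil.
      E = np.array([[0.90, 0.10, 0.25],   # rows = bands, cols = endmembers
                    [0.85, 0.15, 0.30],
                    [0.70, 0.30, 0.35],
                    [0.20, 0.45, 0.40],
                    [0.05, 0.25, 0.45],
                    [0.02, 0.15, 0.50]])

      # Synthetic observed spectrum: an exact 50/30/20 mixture of the columns.
      pixel = E @ np.array([0.5, 0.3, 0.2])
      fractions, residual = nnls(E, pixel)
      fsca = fractions[0] / fractions.sum()  # normalized fractional snow cover
      print(fractions, fsca)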

  16. QERx - A Faster than Real-Time Emulator for Space Processors

    NASA Astrophysics Data System (ADS)

    Carvalho, B.; Pidgeon, A.; Robinson, P.

    2012-08-01

    Developing software for space systems is challenging, especially because it needs to be tested exhaustively to be sure it can cope with the harshness of the environment and with the imperative requirements and constraints imposed by the platform where it will run. Software Validation Facilities (SVF) are well known to the industry and to developers, and provide the means to run the On-Board Software (OBSW) in a realistic environment, allowing the development team to debug and test the software. The challenge is to keep up with the performance of the new processors (LEON2 and LEON3), which need to be emulated within the SVF. Such processor emulators are also used in Operational Simulators, which support mission preparation and train mission operators. These simulators mimic the satellite and its behaviour as realistically as possible. For test/operational efficiency reasons, and because they need to interact with external systems, both use cases require the processor emulators to provide real-time, or faster, performance. It is known to the industry that the performance of previously available emulators cannot cope with the performance of the new processors on the market. SciSys approached this problem with dynamic translation technology, trying to keep costs down by avoiding a hardware solution and keeping the integration flexibility of full software emulation. SciSys presented “QERx: A High Performance Emulator for Software Validation and Simulations” [1] at a previous DASIA event. Since then the idea has evolved and QERx has been successfully validated. SciSys is now presenting QERx as a product that can be tailored to fit different emulation needs. This paper presents QERx's latest developments and current status.

  17. Fast and robust shape diameter function.

    PubMed

    Chen, Shuangmin; Liu, Taijun; Shu, Zhenyu; Xin, Shiqing; He, Ying; Tu, Changhe

    2018-01-01

    The shape diameter function (SDF) is a scalar function defined on a closed manifold surface, measuring the neighborhood diameter of the object at each point. Due to its pose-oblivious property, SDF is widely used in shape analysis, segmentation and retrieval. However, computing SDF is computationally expensive since one has to place an inverted cone at each point and then average the penetration distances for a number of rays inside the cone. Furthermore, the shape diameters are highly sensitive to local geometric features as well as the normal vectors, hence diminishing their applicability to real-world meshes, which often contain rich geometric details and/or various types of defects, such as noise and gaps. In order to increase the robustness of SDF and extend it to a wide range of 3D models, we define SDF by offsetting the input object a little bit. This seemingly minor change brings three significant benefits: First, it allows us to compute SDF in a robust manner since the offset surface is able to give reliable normal vectors. Second, it runs many times faster since at each point we only need to compute the penetration distance along a single direction, rather than tens of directions. Third, our method does not require watertight surfaces as input: it supports both point clouds and meshes with noise and gaps. Extensive experimental results show that the offset-surface based SDF is robust to noise and insensitive to geometric details, and it also runs about 10 times faster than the existing method. We also exhibit its usefulness in two typical applications, shape retrieval and shape segmentation, and observe a significant improvement over the existing SDF.
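
    As a rough sketch of the single-direction variant (hypothetical names; a real implementation would construct the offset surface first and use a spatial acceleration structure such as a BVH rather than brute force), each CUDA thread below casts one ray along the inward normal of its sample point and takes the first hit distance as the shape diameter, using the Möller-Trumbore ray/triangle test:

        struct V3 { float x, y, z; };
        __device__ V3   sub3(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        __device__ V3   cross3(V3 a, V3 b) {
            return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
        __device__ float dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Moller-Trumbore: distance along d to triangle (v0,v1,v2), or -1 on miss.
        __device__ float rayTri(V3 o, V3 d, V3 v0, V3 v1, V3 v2)
        {
            V3 e1 = sub3(v1, v0), e2 = sub3(v2, v0), p = cross3(d, e2);
            float det = dot3(e1, p);
            if (fabsf(det) < 1e-8f) return -1.0f;          // ray parallel to triangle
            float inv = 1.0f / det;
            V3 t = sub3(o, v0);
            float u = dot3(t, p) * inv;
            if (u < 0.0f || u > 1.0f) return -1.0f;
            V3 q = cross3(t, e1);
            float v = dot3(d, q) * inv;
            if (v < 0.0f || u + v > 1.0f) return -1.0f;
            return dot3(e2, q) * inv;                      // hit distance along d
        }

        __global__ void shapeDiameter(const V3 *pts, const V3 *nrm, int nPts,
                                      const V3 *tri, int nTris, float eps, float *sdf)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nPts) return;
            V3 d = {-nrm[i].x, -nrm[i].y, -nrm[i].z};      // single inward direction
            V3 o = {pts[i].x + eps * d.x, pts[i].y + eps * d.y,
                    pts[i].z + eps * d.z};                 // nudge inside: skip self-hit
            float best = 1e30f;
            for (int t = 0; t < nTris; ++t) {              // brute force over triangles
                float h = rayTri(o, d, tri[3*t], tri[3*t + 1], tri[3*t + 2]);
                if (h > 0.0f && h < best) best = h;
            }
            sdf[i] = (best < 1e30f) ? best + eps : 0.0f;   // 0 marks "no hit"
        }

    Replacing the cone of tens of rays with one ray per point is exactly what makes the offset-based SDF an order of magnitude faster, as reported above.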

  18. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
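
    For reference, the global alignment that CLAST performs by default rests on the classic Needleman-Wunsch dynamic programming recurrence, shown here in a textbook form with a linear gap penalty g and substitution score s (CLAST's exact scoring may differ):

        \[
        H_{i,j} = \max\begin{cases}
            H_{i-1,j-1} + s(a_i, b_j) \\
            H_{i-1,j} - g \\
            H_{i,j-1} - g
        \end{cases}
        \qquad H_{i,0} = -i\,g, \quad H_{0,j} = -j\,g.
        \]

    The optional local mode (Smith-Waterman) adds a fourth case, 0, to the maximum and uses zero boundary conditions, allowing an alignment to begin and end anywhere in either sequence.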

  19. Pedestrian crowd dynamics in merging sections: Revisiting the "faster-is-slower" phenomenon

    NASA Astrophysics Data System (ADS)

    Shahhoseini, Zahra; Sarvi, Majid; Saberi, Meead

    2018-02-01

    The study of the discharge of active or self-driven matter in narrow passages has attracted growing interest in a variety of fields. The question has particularly important practical applications for the safety of pedestrian human flows, notably in emergency scenarios. It has been suggested, predominantly through simulation in theoretical studies as well as through a few experiments, that under certain circumstances an elevated vigour to escape may impair the outflow and cause further delay, although the experimental evidence is rather mixed. This complex phenomenon, known as the "faster-is-slower" effect, is crucially important to understand owing to its potential practical implications for emergency management. The contextual requirements for observing this phenomenon are yet to be identified. It is not clear whether a "do not speed up" policy is universally beneficial and advisable in an evacuation scenario. Here, for the first time, we experimentally examine this phenomenon in relation to pedestrian flows at merging sections as a common geometric feature of crowd egress. Various merging angles and three different speed regimes were examined in high-density laboratory experiments. The measurements of flow interruptions and egress efficiency all indicated that the pedestrians were discharged faster when moving at elevated speed levels. We also observed clear dependencies between the discharge rate and the physical layout of the merging section, with certain designs clearly outperforming others. But regardless of the design, we observed faster throughput and greater avalanche sizes when we instructed pedestrians to run. Our results suggest that observing the faster-is-slower effect may require certain critical conditions, including passages that are overly narrow relative to the size of particles (pedestrians), so that long-lasting blockages can form. The faster-is-slower assumption may not be universal, and there may be circumstances where faster is, in fact, faster for evacuees. In the light of these findings, we suggest that it is important to identify and formulate those conditions so they can be disentangled from one another in the models. Misguided overgeneralisations may have unintended adverse ramifications for safe evacuation management, and this highlights the need for further exploration of this phenomenon.

  20. Strength, speed and power characteristics of elite rugby league players.

    PubMed

    de Lacey, James; Brughelli, Matt E; McGuigan, Michael R; Hansen, Keir T

    2014-08-01

    The purpose of this article was to compare strength, speed, and power characteristics between playing positions (forwards and backs) in elite rugby league players. A total of 39 first team players (height, 183.8 ± 5.95 cm; body mass, 100.3 ± 10.7 kg; age, 24 ± 3 years) from a National Rugby League club participated in this study. Testing included 10- and 40-m sprint times, sprint mechanics on an instrumented nonmotorized treadmill, and concentric isokinetic hip and knee extension and flexion. Backs, observed to have significantly (p ≤ 0.05) lighter body mass (effect size [ES] = 0.98), were significantly faster (10-m ES = 1.26; 40-m ES = 1.61) and produced significantly greater relative horizontal force and power (ES = 0.87 and 1.04) compared with forwards. However, no significant differences were found between forwards and backs in relative isokinetic knee extension, knee flexion, relative isokinetic hip extension, flexion, prowler sprints, sprint velocity, contact time, or flight time. The findings demonstrate that backs have similar relative strength in comparison with forwards, but run faster overground and produce significantly greater relative horizontal force and power when sprinting on a nonmotorized instrumented treadmill. Developing force and power in the horizontal direction may be beneficial for improving sprint performance in professional rugby league players.

  1. Officials nationwide give a green light to automated traffic enforcement

    DOT National Transportation Integrated Search

    2000-03-11

    There has been resistance to using cameras to automatically identify vehicles driven by motorists who run red lights and drive faster than the posted speed limits. Fairness, privacy, and "big brother" have been cited as reasons. The article examines ...

  2. Click trains and the rate of information processing: does "speeding up" subjective time make other psychological processes run faster?

    PubMed

    Jones, Luke A; Allely, Clare S; Wearden, John H

    2011-02-01

    A series of experiments demonstrated that a 5-s train of clicks, shown in previous studies to increase the subjective duration of tones they precede (in a manner consistent with "speeding up" timing processes), could also have an effect on information-processing rate. The experiments used simple and choice reaction time tasks (Experiment 1) and mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, but white noise had no effect on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than from those preceded by silence.

  3. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
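
    The coarse/fine linkage described above is the standard Parareal correction: with a cheap coarse propagator G (here built from reduced generator models) and an accurate fine propagator F run in parallel on the sub-intervals, iteration k+1 updates the interface states by

        \[
        U^{n+1}_{k+1} = \mathcal{G}(t_n, t_{n+1}, U^{n}_{k+1})
                      + \mathcal{F}(t_n, t_{n+1}, U^{n}_{k})
                      - \mathcal{G}(t_n, t_{n+1}, U^{n}_{k}).
        \]

    Since G must be evaluated sequentially in every iteration while F parallelizes across sub-intervals, the cost of the coarse solver bounds the achievable speedup, which is why cheapening it with reduced generator models matters.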

  4. Implicit Learning of a Finger Motor Sequence by Patients with Cerebral Palsy After Neurofeedback.

    PubMed

    Alves-Pinto, Ana; Turova, Varvara; Blumenstein, Tobias; Hantuschke, Conny; Lampe, Renée

    2017-03-01

    Facilitation of implicit learning of a hand motor sequence after a single session of neurofeedback training of alpha power recorded from the motor cortex has been shown in healthy individuals (Ros et al., Biological Psychology 95:54-58, 2014). This facilitation effect could potentially be applied to improve the outcome of rehabilitation in patients with impaired hand motor function. In the current study a group of ten patients diagnosed with cerebral palsy trained reduction of alpha power derived from brain activity recorded from right and left motor areas. Training was distributed in three periods of 8 min each. In between, participants performed a serial reaction time task with their non-dominant hand, to a total of five runs. A similar procedure was repeated a week or more later, but this time training was based on simulated brain activity. Reaction times pooled across participants decreased across successive runs faster after neurofeedback training than after the simulation training. Also recorded were two 3-min baseline conditions, one with the eyes open and another with the eyes closed, at the beginning and end of the experimental session. No significant changes in alpha power with neurofeedback or with simulation training were obtained, and no correlation with the reductions in reaction time could be established. Possible contributing factors are discussed.

  5. Energy cost and lower leg muscle activities during erect bipedal locomotion under hyperoxia.

    PubMed

    Abe, Daijiro; Fukuoka, Yoshiyuki; Maeda, Takafumi; Horiuchi, Masahiro

    2018-06-19

    The energy cost of transport per unit distance (CoT) plotted against speed shows a U-shaped curve in walking and a linear relationship in running, indicating that there exists a specific walking speed minimizing the CoT, defined as the economical speed (ES). Another specific gait speed is the intersection speed of the two relationships, called the energetically optimal transition speed (EOTS). We measured the ES, EOTS, and muscle activities during walking and running at the EOTS under hyperoxia (40% fraction of inspired oxygen) on level and uphill (+5%) gradients. Oxygen consumption (V̇O2) and carbon dioxide output (V̇CO2) were measured to calculate the CoT values at eight walking speeds (2.4-7.3 km h-1) and four running speeds (7.3-9.4 km h-1) in 17 young males. Electromyography was recorded from gastrocnemius medialis, gastrocnemius lateralis (GL), and tibialis anterior (TA) to evaluate muscle activities. Mean power frequency (MPF) was obtained to compare motor unit recruitment patterns between walking and running. V̇O2, V̇CO2, and CoT values were lower under hyperoxia than normoxia at faster walking speeds and at all running speeds. A faster ES on the uphill gradient and a slower EOTS on both gradients were observed under hyperoxia than normoxia. GL and TA activities became lower when switching from walking to running at the EOTS under both FiO2 conditions on both gradients, as did the MPF in the TA. The ES and EOTS were influenced by the reduced metabolic demands induced by hyperoxia. GL and TA activities, in association with a lower shift of motor unit recruitment patterns in the TA, would be related to gait selection when walking or running at the EOTS. UMIN000017690 (R000020501). Registered May 26, 2015, before the first trial.
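
    In generic form (an illustrative formulation, not the paper's fitted curves), the two speeds defined above are

        \[
        \mathrm{ES} = \operatorname*{arg\,min}_{v}\,\mathrm{CoT}_{\mathrm{walk}}(v),
        \qquad
        \mathrm{CoT}_{\mathrm{walk}}(v_{\mathrm{EOTS}}) = \mathrm{CoT}_{\mathrm{run}}(v_{\mathrm{EOTS}}),
        \]

    where CoT(v) is the metabolic energy expended per kilogram per metre travelled at speed v (computed from V̇O2 and V̇CO2, in J kg-1 m-1). The U-shaped walking curve guarantees a unique minimum (the ES), and its intersection with the running curve defines the EOTS.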

  6. CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment

    PubMed Central

    Manavski, Svetlin A; Valle, Giorgio

    2008-01-01

    Background Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the lengths of the two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphic cards, to develop high performance solutions for sequence alignment. Results In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware. Conclusions The results show that graphic cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
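
    As a rough illustration of how this maps onto the GPU (a minimal sketch, not the paper's optimized kernel; the names, the linear gap penalty, and the fixed maximum query length are assumptions), each CUDA thread below scores one database sequence against the query with the Smith-Waterman recurrence, keeping a single DP row and the best cell value:

        #define MAX_Q 256                       // max query length (assumption)

        __global__ void swScore(const char *query, int qLen,
                                const char *db, const int *offs, const int *lens,
                                int nSeqs, int gap, int match, int mismatch,
                                int *best)      // gap > 0 penalty, mismatch < 0 score
        {
            int s = blockIdx.x * blockDim.x + threadIdx.x;
            if (s >= nSeqs) return;
            const char *seq = db + offs[s];
            int H[MAX_Q + 1];                   // previous DP row
            for (int j = 0; j <= qLen; ++j) H[j] = 0;
            int top = 0;
            for (int i = 1; i <= lens[s]; ++i) {
                int diag = 0;                   // H[i-1][0]
                for (int j = 1; j <= qLen; ++j) {
                    int sc = (seq[i-1] == query[j-1]) ? match : mismatch;
                    int h = diag + sc;                       // diagonal move
                    if (H[j]   - gap > h) h = H[j]   - gap;  // up: gap in query
                    if (H[j-1] - gap > h) h = H[j-1] - gap;  // left: gap in subject
                    if (h < 0) h = 0;                        // local alignment floor
                    diag = H[j];                // becomes H[i-1][j-1] for next column
                    H[j] = h;
                    if (h > top) top = h;
                }
            }
            best[s] = top;                      // best local score (no traceback)
        }

    Production implementations such as the one described above add affine gap penalties, substitution matrices held in fast on-chip memory, and finer-grained parallelism within a single alignment, but the recurrence being evaluated is the same.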

  7. No Influence of Positive Emotion on Orbitofrontal Reality Filtering: Relevance for Confabulation

    PubMed Central

    Liverani, Maria Chiara; Manuel, Aurélie L.; Guggisberg, Adrian G.; Nahum, Louis; Schnider, Armin

    2016-01-01

    Orbitofrontal reality filtering (ORFi) is a mechanism that allows us to keep thought and behavior in phase with reality. Its failure induces reality confusion with confabulation and disorientation. Confabulations have been claimed to have a positive emotional bias, suggesting that they emanate from a tendency to embellish the situation of a handicap. Here we tested the influence of positive emotion on ORFi in healthy subjects using a paradigm validated in reality confusing patients and with a known electrophysiological signature, a frontal positivity at 200–300 ms after memory evocation. Subjects performed two continuous recognition tasks (“two runs”), composed of the same set of neutral and positive pictures arranged in different orders. In both runs, participants had to indicate picture repetitions within, and only within, the ongoing run. The first run measures learning and recognition. The second run, where all items are familiar, requires ORFi to avoid false positive responses. High-density evoked potentials were recorded from 19 healthy subjects during completion of the task. Performance was more accurate and faster on neutral than positive pictures in both runs and for all conditions. Evoked potential correlates of emotion and reality filtering occurred at 260–350 ms but dissociated in terms of amplitude and topography. In both runs, positive stimuli evoked a more negative frontal potential than neutral ones. In the second run, the frontal positivity characteristic of reality filtering was separately, and to the same degree, expressed for positive and neutral stimuli. We conclude that ORFi, the ability to place oneself correctly in time and space, is not influenced by emotional positivity of the processed material. PMID:27303276

  8. Influence of environmental temperature on duathlon performance.

    PubMed

    Sparks, S A; Cable, N T; Doran, D A; Maclaren, D P M

    The aim of this study was to evaluate the physiological, metabolic and performance responses to duathlon performance under a range of ambient temperatures. Ten male recreational athletes performed three self-paced duathlon time trials consisting of a 5 km run (R1), a 30 km cycle and a 5 km run (R2) at 10 degrees C, 20 degrees C and 30 degrees C and a relative humidity of 50%. Performance times, heart rate (HR), rating of perceived exertion (RPE), core temperature (Tc) and skin temperature (Tsk) were measured every kilometre. Carbohydrate and fat oxidation rates were calculated via expired gas analysis at the first and fourth kilometres during both running stages. Blood samples were taken before and after exercise for the determination of prolactin concentration. Overall performance was significantly faster at 10 degrees C (100.76 ± 5.32 min) than at 30 degrees C (105.38 ± 4.28 min). Significantly higher Tc was noted in the 30 degrees C trial than in the 10 degrees C trial, with concomitant elevations in prolactin after exercise (19.88 ± 6.48 ng/ml at 30 degrees C; 13.10 ± 8.75 ng/ml at 10 degrees C). The rates of carbohydrate oxidation did not differ between conditions, although fat oxidation rates were highest at 10 degrees C. Elevated ambient temperature has a negative effect on duathlon performance. This effect may be reflected in increased Tc and prolactin concentration.

  9. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    NASA Astrophysics Data System (ADS)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
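
    To make the particle stage concrete, here is a minimal CUDA sketch of the kind of ion push such a hybrid code performs (an illustration, not the AMITIS source; the standard Boris scheme is shown, with E and B assumed already interpolated to each particle's position and all names hypothetical):

        #include <cuda_runtime.h>

        // One thread advances one kinetic ion by one timestep dt with the Boris
        // scheme: half electric kick, magnetic rotation, half electric kick, drift.
        // qm is the charge-to-mass ratio q/m.
        __device__ float3 add3(float3 a, float3 b) { return make_float3(a.x + b.x, a.y + b.y, a.z + b.z); }
        __device__ float3 scale3(float3 a, float s) { return make_float3(a.x * s, a.y * s, a.z * s); }
        __device__ float3 cross3(float3 a, float3 b) {
            return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }

        __global__ void borisPush(float3 *x, float3 *v, const float3 *E,
                                  const float3 *B, int n, float qm, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            float h = 0.5f * qm * dt;
            float3 vm = add3(v[i], scale3(E[i], h));          // half E kick
            float3 t  = scale3(B[i], h);                      // rotation vector
            float  t2 = t.x*t.x + t.y*t.y + t.z*t.z;
            float3 vp = add3(vm, cross3(vm, t));              // v' = v- + v- x t
            float3 s  = scale3(t, 2.0f / (1.0f + t2));
            float3 vpl = add3(vm, cross3(vp, s));             // v+ = v- + v' x s
            v[i] = add3(vpl, scale3(E[i], h));                // second half E kick
            x[i] = add3(x[i], scale3(v[i], dt));              // position drift
        }

    One thread per ion keeps the update embarrassingly parallel, which is the regime where a single GPU can outpace a parallel multi-CPU implementation, as reported above.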

  10. Migratory Patterns of Wild Chinook Salmon Oncorhynchus tshawytscha Returning to a Large, Free-Flowing River Basin

    PubMed Central

    Eiler, John H.; Evans, Allison N.; Schreck, Carl B.

    2015-01-01

    Upriver movements were determined for Chinook salmon Oncorhynchus tshawytscha returning to the Yukon River, a large, virtually pristine river basin. These returns have declined dramatically since the late 1990s, and information is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio tagged during 2002–2004. Most (97.5%) of the fish tracked upriver to spawning areas displayed continual upriver movements and strong fidelity to the terminal tributaries entered. Movement rates were substantially slower for fish spawning in lower river tributaries (28–40 km d-1) compared to upper basin stocks (52–62 km d-1). Three distinct migratory patterns were observed, including a gradual decline, pronounced decline, and substantial increase in movement rate as the fish moved upriver. Stocks destined for the same region exhibited similar migratory patterns. Individual fish within a stock showed substantial variation, but tended to reflect the regional pattern. Differences between consistently faster and slower fish explained 74% of the within-stock variation, whereas relative shifts in sequential movement rates between “hares” (faster fish becoming slower) and “tortoises” (slow but steady fish) explained 22% of the variation. Pulses of fish moving upriver were not cohesive. Fish tagged over a 4-day period took 16 days to pass a site 872 km upriver. Movement rates were substantially faster and the percentage of atypical movements considerably less than reported in more southerly drainages, but may reflect the pristine conditions within the Yukon River, wild origins of the fish, and discrete run timing of the returns. Movement data can provide numerous insights into the status and management of salmon returns, particularly in large river drainages with widely scattered fisheries where management actions in the lower river potentially impact harvests and escapement farther upstream. However, the substantial variation exhibited among individual fish within a stock can complicate these efforts. PMID:25919286

  11. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as an Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
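
    For concreteness, this is the kind of hand-written CUDA kernel that the OpenACC pragmas spare the scientist from writing: a generic explicit 2D diffusion stencil standing in for the diffusive part of a cardiac monodomain-style model (names are illustrative, and the model's reaction terms are omitted):

        // One thread updates one interior grid point with an explicit
        // 5-point Laplacian step; r = D * dt / h^2 must satisfy the usual
        // explicit stability bound.
        __global__ void stencilStep(const float *u, float *uNew,
                                    int nx, int ny, float r)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            int j = blockIdx.y * blockDim.y + threadIdx.y;
            if (i < 1 || i >= nx - 1 || j < 1 || j >= ny - 1) return;
            int k = j * nx + i;
            uNew[k] = u[k] + r * (u[k-1] + u[k+1] + u[k-nx] + u[k+nx] - 4.0f * u[k]);
        }

    With OpenACC, the corresponding loop nest stays in plain C and is simply annotated with a directive such as #pragma acc parallel loop, which is the "few pragmas" route to the GPU speedups described above.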

  13. Glucose-fructose likely improves gastrointestinal comfort and endurance running performance relative to glucose-only.

    PubMed

    Wilson, P B; Ingraham, S J

    2015-12-01

    This study aimed to determine whether glucose-fructose (GF) ingestion, relative to glucose-only, would alter performance, metabolism, gastrointestinal (GI) symptoms, and psychological affect during prolonged running. On two occasions, 20 runners (14 men) completed a 120-min submaximal run followed by a 4-mile time trial (TT). Participants consumed glucose-only (G) or GF (1.2:1 ratio) beverages, which supplied ∼1.3 g/min of carbohydrate. Substrate use, blood lactate, psychological affect [Feeling Scale (FS)], and GI distress were measured. Differences between conditions were assessed using magnitude-based inferential statistics. Participants completed the TT 1.9% (-1.9; -4.2, 0.4) faster with GF, representing a likely benefit. FS ratings were possibly higher and GI symptoms were possibly-to-likely lower with GF during the submaximal period and TT. Effect sizes for GI distress and FS ratings were relatively small (Cohen's d = ∼0.2 to 0.4). GF resulted in possibly higher fat oxidation during the submaximal period. No clear differences in lactate were observed. In conclusion, GF ingestion - compared with glucose-only - likely improves TT performance after 2 h of submaximal running, and GI distress and psychological affect are likely mechanisms. These results apply to runners consuming fluid at 500-600 mL/h and carbohydrate at 1.0-1.3 g/min during running at 60-70% VO2peak.

  14. Biomechanics and running economy.

    PubMed

    Anderson, T

    1996-08-01

    Running economy, which has traditionally been measured as the oxygen cost of running at a given velocity, has been accepted as the physiological criterion for 'efficient' performance and has been identified as a critical element of overall distance running performance. There is an intuitive link between running mechanics and energy cost of running, but research to date has not established a clear mechanical profile of an economic runner. It appears that through training, individuals are able to integrate and accommodate their own unique combination of dimensions and mechanical characteristics so that they arrive at a running motion which is most economical for them. Information in the literature suggests that biomechanical factors are likely to contribute to better economy in any runner. A variety of anthropometric dimensions could influence biomechanical effectiveness. These include: average or slightly smaller than average height for men and slightly greater than average height for women; high ponderal index and ectomorphic or ectomesomorphic physique; low percentage body fat; leg morphology which distributes mass closer to the hip joint; narrow pelvis and smaller than average feet. Gait patterns, kinematics and the kinetics of running may also be related to running economy. These factors include: stride length which is freely chosen over considerable running time; low vertical oscillation of body centre of mass; more acute knee angle during swing; less range of motion but greater angular velocity of plantar flexion during toe-off; arm motion of smaller amplitude; low peak ground reaction forces; faster rotation of shoulders in the transverse plane; greater angular excursion of the hips and shoulders about the polar axis in the transverse plane; and effective exploitation of stored elastic energy. Other factors which may improve running economy are: lightweight but well-cushioned shoes; more comprehensive training history; and the running surface of intermediate compliance. At the developmental level, this information might be useful in identifying athletes with favourable characteristics for economical distance running. At higher levels of competition, it is likely that 'natural selection' tends to eliminate athletes who failed to either inherit or develop characteristics which favour economy.

  15. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  16. Use of an antigravity treadmill for rehabilitation of a pelvic stress injury.

    PubMed

    Tenforde, Adam S; Watanabe, Laine M; Moreno, Tamara J; Fredericson, Michael

    2012-08-01

    Pelvic stress injuries are a relatively uncommon form of injury that requires a high index of clinician suspicion and usually MRI for definitive diagnosis. We present a case report of a 21-year-old female elite runner who was diagnosed with pelvic stress injury and used an antigravity treadmill during rehabilitation. She was able to return to pain-free ground running at 8 weeks after running at 95% body weight on the antigravity treadmill. Ten weeks from time of diagnosis, she competed at her conference championships and advanced to the NCAA Championships in the 10,000 meters. She competed in both races without residual pain. To our knowledge, this is the first published case report on use of an antigravity treadmill in rehabilitation of bone-related injuries. Our findings suggest that use of an antigravity treadmill for rehabilitation of a pelvic stress injury may result in appropriate bone loading and healing during progression to ground running and faster return to competition. Future research may identify appropriate protocols for recovery from overuse lower extremity injuries and other uses for this technology, including neuromuscular recovery and injury prevention.

  17. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
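
    For context, the fit being computed is the standard least-squares solution: given an N x d design matrix X (assumed here to have full column rank) and a response vector y,

        \[
        \hat{\beta} = \operatorname*{arg\,min}_{\beta} \lVert X\beta - y \rVert_2^2
                    = (X^{\top} X)^{-1} X^{\top} y .
        \]

    Classical solvers pay a cost polynomial in N to produce the d entries of the estimate; the quantum algorithm's dependence on N only through its logarithm is what makes it attractive for large data sets.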

  18. Locomotion energetics and gait characteristics of a rat-kangaroo, Bettongia penicillata, have some kangaroo-like features.

    PubMed

    Webster, K N; Dawson, T J

    2003-09-01

    The locomotory characteristics of kangaroos and wallabies are unusual, with both energetic costs and gait parameters differing from those of quadrupedal running mammals. The kangaroos and wallabies have an evolutionary history of only around 5 million years; their closest relatives, the rat-kangaroos, have a fossil record of more than 26 million years. We examined the locomotory characteristics of a rat-kangaroo, Bettongia penicillata. Locomotory energetics and gait parameters were obtained from animals exercising on a motorised treadmill at speeds from 0.6 m s(-1) to 6.2 m s(-1). Aerobic metabolic costs increased as hopping speed increased, but were significantly different from the costs for a running quadruped; at the fastest speed, the cost of hopping was 50% of the cost of running. Therefore B. penicillata can travel much faster than quadrupedal runners at similar levels of aerobic output. The maximum aerobic output of B. penicillata was 17 times its basal metabolism. Increases in speed during hopping were achieved through increases in stride length, with stride frequency remaining constant. We suggest that these unusual locomotory characteristics are a conservative feature among the hopping marsupials, with an evolutionary history of 20-30 million years.

  19. An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization

    PubMed Central

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach combining harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA converges faster than either HS or FA. In addition, a top-fireflies scheme is introduced to reduce running time, and HS is used to mutate fireflies during the update step. The HS/FA method is verified on various benchmarks. In the experiments, HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137
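
    In the standard FA (Yang's formulation, shown here for reference; the paper gives the exact hybrid update), firefly i moves toward a brighter firefly j according to

        \[
        x_i \leftarrow x_i + \beta_0\, e^{-\gamma r_{ij}^2}\,(x_j - x_i) + \alpha\,\varepsilon_i ,
        \]

    where r_ij is the distance between the pair, β0 the attractiveness at zero distance, γ the light-absorption coefficient, and ε_i a random perturbation vector. Per the abstract, HS/FA restricts this update to the top fireflies (reducing running time) and replaces part of the randomization with harmony-search mutation.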

  20. Breaking Megrelishvili protocol using matrix diagonalization

    NASA Astrophysics Data System (ADS)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of the Megrelishvili protocol, a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP), a mathematical problem that strongly relates to the security of the protocol. In particular, we investigate asymptotic upper bounds for the running time and memory requirement of the MVMP when it involves a diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: the use of a primitive matrix in the Megrelishvili protocol makes the protocol more vulnerable to attacks.
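
    The linear algebra behind the stated speedup is the spectral identity for matrix powers (standard material; the full attack is detailed in the paper): if the n x n public matrix M is diagonalizable as M = PDP^{-1}, then

        \[
        M^{k} = P\,D^{k}\,P^{-1}, \qquad
        D^{k} = \operatorname{diag}\!\left(\lambda_1^{k}, \ldots, \lambda_n^{k}\right),
        \]

    so any power of M costs only n scalar exponentiations plus two fixed matrix multiplications once P and D are known, rather than repeated matrix products.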

  1. Open Labware: 3-D Printing Your Own Lab Equipment

    PubMed Central

    Baden, Tom; Chagas, Andre Maia; Gage, Greg; Marzullo, Timothy; Prieto-Godino, Lucia L.; Euler, Thomas

    2015-01-01

    The introduction of affordable, consumer-oriented 3-D printers is a milestone in the current “maker movement,” which has been heralded as the next industrial revolution. Combined with free and open sharing of detailed design blueprints and accessible development tools, rapid prototypes of complex products can now be assembled in one’s own garage—a game-changer reminiscent of the early days of personal computing. At the same time, 3-D printing has also allowed the scientific and engineering community to build the “little things” that help a lab get up and running much faster and easier than ever before. PMID:25794301

  2. Improved marathon performance by in-race nutritional strategy intervention.

    PubMed

    Hansen, Ernst Albin; Emanuelsen, Anders; Gertsen, Robert Mørkegaard; Sørensen S, S R

    2014-12-01

    It was tested whether a marathon was completed faster by applying a scientifically based rather than a freely chosen nutritional strategy. Furthermore, gastrointestinal symptoms were evaluated. Nonelite runners performed a 10 km time trial 7 weeks before Copenhagen Marathon 2013 for estimation of running ability. Based on the time, runners were divided into two similar groups that eventually should perform the marathon by applying the two nutritional strategies. Matched pairs design was applied. Before the marathon, runners were paired based on their prerace running ability. Runners applying the freely chosen nutritional strategy (n = 14; 33.6 ± 9.6 years; 1.83 ± 0.09 m; 77.4 ± 10.6 kg; 45:40 ± 4:32 min for 10 km) could freely choose their in-race intake. Runners applying the scientifically based nutritional strategy (n = 14; 41.9 ± 7.6 years; 1.79 ± 0.11 m; 74.6 ± 14.5 kg; 45:44 ± 4:37 min) were targeting a combined in-race intake of energy gels and water, where the total intake amounted to approximately 0.750 L water, 60 g maltodextrin and glucose, 0.06 g sodium, and 0.09 g caffeine per hr. Gastrointestinal symptoms were assessed by a self-administered postrace questionnaire. Marathon time was 3:49:26 ± 0:25:05 and 3:38:31 ± 0:24:54 hr for runners applying the freely chosen and the scientifically based strategy, respectively (p = .010, effect size=-0.43). Certain runners experienced diverse serious gastrointestinal symptoms, but overall, symptoms were low and not different between groups (p > .05). In conclusion, nonelite runners completed a marathon on average 10:55 min, corresponding to 4.7%, faster by applying a scientifically based rather than a freely chosen nutritional strategy. Furthermore, average values of gastrointestinal symptoms were low and not different between groups.

  3. Towards Clinical Molecular Diagnosis of Inherited Cardiac Conditions: A Comparison of Bench-Top Genome DNA Sequencers

    PubMed Central

    Wilkinson, Samuel L.; John, Shibu; Walsh, Roddy; Novotny, Tomas; Valaskova, Iveta; Gupta, Manu; Game, Laurence; Barton, Paul J R.; Cook, Stuart A.; Ware, James S.

    2013-01-01

    Background Molecular genetic testing is recommended for diagnosis of inherited cardiac disease, to guide prognosis and treatment, but access is often limited by cost and availability. Recently introduced high-throughput bench-top DNA sequencing platforms have the potential to overcome these limitations. Methodology/Principal Findings We evaluated two next-generation sequencing (NGS) platforms for molecular diagnostics. The protein-coding regions of six genes associated with inherited arrhythmia syndromes were amplified from 15 human samples using parallelised multiplex PCR (Access Array, Fluidigm), and sequenced on the MiSeq (Illumina) and Ion Torrent PGM (Life Technologies). Overall, 97.9% of the target was sequenced adequately for variant calling on the MiSeq, and 96.8% on the Ion Torrent PGM. Regions missed tended to be of high GC-content, and most were problematic for both platforms. Variant calling was assessed using 107 variants detected using Sanger sequencing: within adequately sequenced regions, variant calling on both platforms was highly accurate (Sensitivity: MiSeq 100%, PGM 99.1%. Positive predictive value: MiSeq 95.9%, PGM 95.5%). At the time of the study the Ion Torrent PGM had a lower capital cost and individual runs were cheaper and faster. The MiSeq had a higher capacity (requiring fewer runs), with reduced hands-on time and simpler laboratory workflows. Both provide significant cost and time savings over conventional methods, even allowing for adjunct Sanger sequencing to validate findings and sequence exons missed by NGS. Conclusions/Significance MiSeq and Ion Torrent PGM both provide accurate variant detection as part of a PCR-based molecular diagnostic workflow, and provide alternative platforms for molecular diagnosis of inherited cardiac conditions. Though there were performance differences at this throughput, platforms differed primarily in terms of cost, scalability, protocol stability and ease of use. Compared with current molecular genetic diagnostic tests for inherited cardiac arrhythmias, these NGS approaches are faster, less expensive, and yet more comprehensive. PMID:23861798

  4. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  5. Simultaneous acquisition for T2-T2 exchange and T1-T2 correlation NMR experiments

    NASA Astrophysics Data System (ADS)

    Montrazi, Elton T.; Lucas-Oliveira, Everton; Araujo-Ferreira, Arthur G.; Barsi-Andreeta, Mariane; Bonagamba, Tito J.

    2018-04-01

    NMR measurements of longitudinal and transverse relaxation times and their multidimensional correlations provide useful information about molecular dynamics. However, these experiments are very time-consuming, and many researchers have proposed faster experiments to mitigate this issue. This paper presents a new way to simultaneously perform T2-T2 exchange and T1-T2 correlation experiments by taking advantage of the storage time and the two-step phase cycling used for running the relaxation exchange experiment. The data corresponding to each step are either summed or subtracted to produce the T2-T2 and T1-T2 datasets, enhancing the information obtained while maintaining the experiment duration. Comparing the results of this technique with traditional NMR experiments made it possible to validate the method.
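
    Schematically (an illustration of the separation idea, not the paper's exact phase table), if S^(1) and S^(2) denote the signals acquired in the two phase-cycle steps, the two datasets are recovered as

        \[
        S_{\mathrm{A}} = \tfrac{1}{2}\left(S^{(1)} + S^{(2)}\right), \qquad
        S_{\mathrm{B}} = \tfrac{1}{2}\left(S^{(1)} - S^{(2)}\right),
        \]

    with the sum isolating one coherence pathway and the difference the other; which combination yields the T2-T2 and which the T1-T2 dataset depends on the chosen phase table.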

  6. Transportation and the environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchins, J.G.B.

    1977-01-01

    The long-run, often tortuous, adjustment of a society to fundamental changes in the transport system is discussed. This is a type of change that occurs over very long time periods. Therefore, at times, the perspective of the economic historian is adopted. Change occurs faster in some countries than in others because of differentials in income, technology, enterprise systems, public policies, and above all, in propensities to accept change. The United States probably has a higher propensity in this regard than Europe, and Europe than many other parts of the world. The author here, however, deals with reaction patterns which, over a period, drastically change society. Thus, much of the analysis is based on developments in the U.S., but some attempt has been made to note European developments also. The U.S. is undoubtedly the most mobile society of all time.

  7. Toward transient finite element simulation of thermal deformation of machine tools in real-time

    NASA Astrophysics Data System (ADS)

    Naumann, Andreas; Ruprecht, Daniel; Wensch, Joerg

    2018-01-01

    Finite element models without simplifying assumptions can accurately describe the spatial and temporal distribution of heat in machine tools as well as the resulting deformation. In principle, this allows correcting for displacements of the Tool Centre Point and enables high precision manufacturing. However, the computational cost of FE models and the restriction to generic algorithms in commercial tools like ANSYS prevent their operational use, since simulations have to run faster than real-time. For the case where heat diffusion is slow compared to machine movement, we introduce a tailored implicit-explicit multi-rate time stepping method of higher order based on spectral deferred corrections. Using the open-source FEM library DUNE, we show that fully coupled simulations of the temperature field are possible in real-time for a machine consisting of a stock sliding up and down on rails attached to a stand.
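
    The building block of such a scheme (a generic first-order sketch; the paper's multi-rate, higher-order SDC sweep refines it) is an implicit-explicit splitting of the semi-discrete problem du/dt = f_E(u) + f_I(u), with the stiff diffusive terms in f_I treated implicitly and the non-stiff terms in f_E explicitly:

        \[
        u^{n+1} = u^{n} + \Delta t \left( f_E(u^{n}) + f_I(u^{n+1}) \right).
        \]

    Spectral deferred corrections then iterate such low-order steps over quadrature nodes within each interval to recover higher order, and the multi-rate variant advances f_E and f_I with different step sizes so that fast machine movement and slow heat diffusion are each resolved at their own scale.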

  8. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management strongly relies on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too computationally demanding, especially when simulating long time periods that are needed for statistical analysis of the results, or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as Plug Flow Reactors (PFR) and Continuously Stirred Tank Reactors (CSTR) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10^5) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs.
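
    The two conceptual building blocks are textbook reactor models (shown here for reference, not the paper's calibrated equations): a CSTR with volume V, through-flow Q, and transformation kinetics r(C) obeys the mass balance

        \[
        V\,\frac{dC}{dt} = Q\left(C_{\mathrm{in}}(t) - C(t)\right) + r\!\left(C(t)\right)V,
        \]

    while an ideal PFR acts as a pure advective delay, C_out(t) = C_in(t - τ) with residence time τ = V/Q (plus transformation over the delay). Chaining PFRs and CSTRs per river reach, with residence times taken from the hydrodynamic results as described above, reproduces the advection-diffusion behaviour at a small fraction of the cost of the detailed model.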

  9. Limited Transfer of Newly Acquired Movement Patterns across Walking and Running in Humans

    PubMed Central

    Ogawa, Tetsuya; Kawashima, Noritaka; Ogata, Toru; Nakazawa, Kimitaka

    2012-01-01

    The two major modes of locomotion in humans, walking and running, may be regarded as functions of speed (walking being the slower mode and running the faster). Recent results using motor learning tasks in humans, as well as more direct evidence from animal models, advocate for independence in the neural control mechanisms underlying different locomotion tasks. In the current study, we investigated the possible independence of the neural mechanisms underlying human walking and running. Subjects were tested on a split-belt treadmill and adapted to walking or running on an asymmetrically driven treadmill surface. Despite the acquisition of asymmetrical movement patterns in the respective modes, the emergence of asymmetrical movement patterns in the subsequent trials was evident only within the same modes (walking after learning to walk and running after learning to run) and only partially in the opposite modes (walking after learning to run and running after learning to walk), thus transferring only to a limited extent across modes. Further, the storage of the acquired movement pattern in each mode was maintained independently of the opposite mode. Combined, these results provide indirect evidence for independence in the neural control mechanisms underlying the two locomotive modes. PMID:23029490

  10. Performance differences between sexes in 50-mile to 3,100-mile ultramarathons.

    PubMed

    Zingg, Matthias A; Knechtle, Beat; Rosemann, Thomas; Rüst, Christoph A

    2015-01-01

    Anecdotal reports have assumed that women would be able to outrun men in long-distance running. The aim of this study was to test this assumption by investigating the changes in performance difference between sexes in the best ultramarathoners in 50-mile, 100-mile, 200-mile, 1,000-mile, and 3,100-mile events held worldwide between 1971 and 2012. The sex differences in running speed for the fastest runners ever were analyzed using one-way analysis of variance with subsequent Tukey-Kramer posthoc analysis. Changes in sex difference in running speed of the annual fastest were analyzed using linear and nonlinear regression analyses, correlation analyses, and mixed-effects regression analyses. The fastest men ever were faster than the fastest women ever in 50-mile (17.5%), 100-mile (17.4%), 200-mile (9.7%), 1,000-mile (20.2%), and 3,100-mile (18.6%) events. For the ten fastest finishers ever, men were faster than women in 50-mile (17.1%±1.9%), 100-mile (19.2%±1.5%), and 1,000-mile (16.7%±1.6%) events. No correlation existed between sex difference and running speed for the fastest ever (r2=0.0039, P=0.91) and the ten fastest ever (r2=0.15, P=0.74) for all distances. For the annual fastest, the sex difference in running speed decreased linearly in 50-mile events from 14.6% to 8.9%, remained unchanged in 100-mile (18.0%±8.4%) and 1,000-mile (13.7%±9.1%) events, and increased in 3,100-mile events from 12.5% to 16.9%. For the annual ten fastest runners, the performance difference between sexes decreased linearly in 50-mile events from 31.6%±3.6% to 8.9%±1.8% and in 100-mile events from 26.0%±4.4% to 24.7%±0.9%. To summarize, the fastest men were ~17%-20% faster than the fastest women for all distances from 50 miles to 3,100 miles. The linear decrease in sex difference for 50-mile and 100-mile events may suggest that women are reducing the sex gap for these distances.

  11. Performance differences between sexes in 50-mile to 3,100-mile ultramarathons

    PubMed Central

    Zingg, Matthias A; Knechtle, Beat; Rosemann, Thomas; Rüst, Christoph A

    2015-01-01

    Anecdotal reports have assumed that women would be able to outrun men in long-distance running. The aim of this study was to test this assumption by investigating the changes in performance difference between sexes in the best ultramarathoners in 50-mile, 100-mile, 200-mile, 1,000-mile, and 3,100-mile events held worldwide between 1971 and 2012. The sex differences in running speed for the fastest runners ever were analyzed using one-way analysis of variance with subsequent Tukey–Kramer posthoc analysis. Changes in sex difference in running speed of the annual fastest were analyzed using linear and nonlinear regression analyses, correlation analyses, and mixed-effects regression analyses. The fastest men ever were faster than the fastest women ever in 50-mile (17.5%), 100-mile (17.4%), 200-mile (9.7%), 1,000-mile (20.2%), and 3,100-mile (18.6%) events. For the ten fastest finishers ever, men were faster than women in 50-mile (17.1%±1.9%), 100-mile (19.2%±1.5%), and 1,000-mile (16.7%±1.6%) events. No correlation existed between sex difference and running speed for the fastest ever (r2=0.0039, P=0.91) and the ten fastest ever (r2=0.15, P=0.74) for all distances. For the annual fastest, the sex difference in running speed decreased linearly in 50-mile events from 14.6% to 8.9%, remained unchanged in 100-mile (18.0%±8.4%) and 1,000-mile (13.7%±9.1%) events, and increased in 3,100-mile events from 12.5% to 16.9%. For the annual ten fastest runners, the performance difference between sexes decreased linearly in 50-mile events from 31.6%±3.6% to 8.9%±1.8% and in 100-mile events from 26.0%±4.4% to 24.7%±0.9%. To summarize, the fastest men were ~17%–20% faster than the fastest women for all distances from 50 miles to 3,100 miles. The linear decrease in sex difference for 50-mile and 100-mile events may suggest that women are reducing the sex gap for these distances. PMID:25653567

  12. Australia's marine virtual laboratory

    NASA Astrophysics Data System (ADS)

    Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe

    2014-05-01

    In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model and produce a simulation of value. The steps are generally the same, independent of the modelling system chosen: determining the time and space scales and processes of the required simulation; obtaining data for the initial set-up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time-consuming and resource-hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps and bring the researcher more quickly to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based, open-source application which provides a number of model choices and search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia, ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.

  13. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment

    PubMed Central

    Zhang, Rubo; Yang, Yu

    2017-01-01

    We study a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning problem. In the uncertain marine environment, the rolling time domain control technique is used to perform a numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques: it can greatly reduce the computational workload and realize the trade-off among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution. PMID:29186166

  14. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment.

    PubMed

    Li, Jianjun; Zhang, Rubo; Yang, Yu

    2017-01-01

    We study a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning problem. In the uncertain marine environment, the rolling time domain control technique is used to perform a numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques: it can greatly reduce the computational workload and realize the trade-off among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution.
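
    The rolling-time-domain idea can be sketched generically: optimize a short-horizon plan with a bee-colony-style search, execute only the first action, then re-plan from the new state. The toy below does exactly that for a point vehicle steering to a goal; the cost function, neighbourhood move, and all parameters are invented for illustration, and this is not the STDQABC algorithm.

        # Receding-horizon planning with a bee-colony-style local search (toy).
        import numpy as np

        rng = np.random.default_rng(0)

        def cost(plan, state):
            # hypothetical cost: summed distance of planned waypoints to (10, 10)
            pos = state + np.cumsum(plan, axis=0)
            return np.linalg.norm(pos - np.array([10.0, 10.0]), axis=1).sum()

        def bee_search(state, horizon=5, n_bees=20, iters=50):
            plans = rng.uniform(-1, 1, (n_bees, horizon, 2))  # candidate moves
            for _ in range(iters):
                for i in range(n_bees):        # employed-bee neighbourhood move
                    trial = plans[i] + 0.3 * rng.standard_normal(plans[i].shape)
                    if cost(trial, state) < cost(plans[i], state):
                        plans[i] = trial
            return min(plans, key=lambda p: cost(p, state))

        state = np.zeros(2)
        for _ in range(20):        # rolling horizon: re-plan, execute first move
            plan = bee_search(state)
            state = state + plan[0]
        print(state)               # vehicle position after 20 re-planning steps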

  15. Improved dewpoint-probe calibration

    NASA Technical Reports Server (NTRS)

    Stephenson, J. G.; Theodore, E. A.

    1978-01-01

    A relatively simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. The technique requires only a pressure measurement at each calibration point and a single absolute-humidity measurement at the beginning of a run. Several probes can be calibrated simultaneously, and points can be checked above room temperature.

  16. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection.

    PubMed

    Toofanny, Rudesh D; Simms, Andrew M; Beck, David A C; Daggett, Valerie

    2011-08-10

    Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows quadratically with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Our implementation of spatial indexing reduces the run-time for calculating contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., it is 1.2 to 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run-time by between 31% and 81% (1.4 to 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page-level compression on both the data and the indexes. The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbour discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008.
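
    The binning idea is simple to sketch: hash every atom into a voxel whose edge equals the contact cutoff, then compare each atom only against atoms in the 27 neighbouring voxels. The sketch below is a minimal in-memory version of that scheme; the paper's implementation lives inside MS SQL SERVER 2008 rather than in Python, and the cutoff here is an arbitrary choice.

        # Spatial hashing for contact detection: bin atoms into cutoff-sized
        # voxels, then test distances only within neighbouring voxels.
        import numpy as np
        from collections import defaultdict
        from itertools import product

        def contacts(coords, cutoff=4.5):
            bins = defaultdict(list)
            for i, p in enumerate(coords):
                bins[tuple((p // cutoff).astype(int))].append(i)  # voxel index
            pairs = []
            for key, members in bins.items():
                for d in product((-1, 0, 1), repeat=3):  # 27 neighbouring voxels
                    for j in bins.get(tuple(np.add(key, d)), []):
                        for i in members:
                            if i < j and np.linalg.norm(coords[i] - coords[j]) < cutoff:
                                pairs.append((i, j))     # each pair counted once
            return pairs

        coords = np.random.default_rng(1).uniform(0, 50, (1000, 3))
        print(len(contacts(coords)))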

  17. Implementation of 3D spatial indexing and compression in a large-scale molecular dynamics simulation database for rapid atomic contact detection

    PubMed Central

    2011-01-01

    Background: Molecular dynamics (MD) simulations offer the ability to observe the dynamics and interactions of both whole macromolecules and individual atoms as a function of time. Taken in context with experimental data, atomic interactions from simulation provide insight into the mechanics of protein folding, dynamics, and function. The calculation of atomic interactions or contacts from an MD trajectory is computationally demanding, and the work required grows quadratically with the size of the simulation system. We describe the implementation of a spatial indexing algorithm in our multi-terabyte MD simulation database that significantly reduces the run-time required for discovery of contacts. The approach is applied to the Dynameomics project data. Spatial indexing, also known as spatial hashing, is a method that divides the simulation space into regular-sized bins and attributes an index to each bin. Since the calculation of contacts is widely employed in the simulation field, we also use it as the basis for testing compression of data tables. We investigate the effects of compression of the trajectory coordinate tables with different options of data and index compression within MS SQL SERVER 2008. Results: Our implementation of spatial indexing reduces the run-time for calculating contacts over a 1 nanosecond (ns) simulation window by between 14% and 90% (i.e., it is 1.2 to 10.3 times faster). For a 'full' simulation trajectory (51 ns), spatial indexing reduces the calculation run-time by between 31% and 81% (1.4 to 5.3 times faster). Compression reduced table sizes but made no significant difference to the total execution time for neighbour discovery. The greatest compression (~36%) was achieved using page-level compression on both the data and the indexes. Conclusions: The spatial indexing scheme significantly decreases the time taken to calculate atomic contacts and could be applied to other multidimensional neighbour discovery problems. The speed-up enables on-the-fly calculation and visualization of contacts and rapid cross-simulation analysis for knowledge discovery. Using page compression for the atomic coordinate tables and indexes saves ~36% of disk space without any significant decrease in calculation time and should be considered for other non-transactional databases in MS SQL SERVER 2008. PMID:21831299

  18. Effects of Run-Up Velocity on Performance, Kinematics, and Energy Exchanges in The Pole Vault

    PubMed Central

    Linthorne, Nicholas P.; Weetman, A. H. Gemma

    2012-01-01

    This study examined the effect of run-up velocity on the peak height achieved by the athlete in the pole vault and on the corresponding changes in the athlete's kinematics and energy exchanges. Seventeen jumps by an experienced male pole vaulter were video recorded in the sagittal plane and a wide range of run-up velocities (4.5-8.5 m/s) was obtained by setting the length of the athlete's run-up (2-16 steps). A selection of performance variables, kinematic variables, energy variables, and pole variables were calculated from the digitized video data. We found that the athlete's peak height increased linearly at a rate of 0.54 m per 1 m/s increase in run-up velocity and this increase was achieved through a combination of a greater grip height and a greater push height. At the athlete's competition run-up velocity (8.4 m/s) about one third of the rate of increase in peak height arose from an increase in grip height and about two thirds arose from an increase in push height. Across the range of run-up velocities examined here the athlete always performed the basic actions of running, planting, jumping, and inverting on the pole. However, he made minor systematic changes to his jumping kinematics, vaulting kinematics, and selection of pole characteristics as the run-up velocity increased. The increase in run-up velocity and changes in the athlete's vaulting kinematics resulted in substantial changes to the magnitudes of the energy exchanges during the vault. A faster run-up produced a greater loss of energy during the take-off, but this loss was not sufficient to negate the increase in run-up velocity and the increase in work done by the athlete during the pole support phase. The athlete therefore always had a net energy gain during the vault. However, the magnitude of this gain decreased slightly as run-up velocity increased. Key points: In the pole vault the optimum technique is to run up as fast as possible. The athlete's vault height increases at a rate of about 0.5 m per 1 m/s increase in run-up velocity. The increase in vault height is achieved through a greater grip height and a greater push height; at the athlete's competition run-up velocity, about one third of the rate of increase in vault height arises from an increase in grip height and two thirds arises from an increase in push height. The athlete has a net energy gain during the vault: a faster run-up velocity produces a greater loss of energy during the take-off, but this loss of energy is not sufficient to negate the increase in run-up velocity and the increase in the work done by the athlete during the pole support phase. PMID:24149197

  19. Evaluation of the effects of supplementation with Pycnogenol® on fitness in normal subjects with the Army Physical Fitness Test and in performances of athletes in the 100-minute triathlon.

    PubMed

    Vinciguerra, G; Belcaro, G; Bonanni, E; Cesarone, M R; Rotondi, V; Ledda, A; Hosoi, M; Dugall, M; Cacchio, M; Cornelli, U

    2013-12-01

    The aim of this registry study was to evaluate the effects of Pycnogenol® (French pine bark extract) on improving physical fitness (PF) in normal individuals using the Army Physical Fitness Test (APFT). The study evaluated the efficacy of Pycnogenol, used as a supplement, in improving training, exercise, recovery and oxidative stress. The study was divided into two parts. In PART 1 (Pycnogenol 100 mg/day), the APFT was used to assess improvement in PF during an 8-week preparation and training program. In PART 2 (Pycnogenol 150 mg/day), the study evaluated the effects of Pycnogenol supplementation in athletes training for a triathlon. PART 1: There was a significant improvement in the 2-mile running time in both males and females within both groups, but the group using Pycnogenol (74 subjects) performed statistically better than controls (73 subjects). The number of push-ups improved, with Pycnogenol subjects performing better. Sit-ups also improved in the Pycnogenol group. Oxidative stress decreased with exercise in all subjects; in Pycnogenol subjects the results were significantly better. PART 2: In the Pycnogenol group, 32 males (37.9; SD 4.4 years) were compliant with the training plan at 4 weeks. In controls, 22 subjects (37.2; SD 3.5 years) completed the training plans. The swimming, biking and running scores in both groups improved with training, with the Pycnogenol group showing more benefit than controls. The total triathlon time was 89 min 44 s in Pycnogenol subjects versus 96 min 5 s in controls. Controls improved their performance time by an average of 4.6 minutes, compared with an improvement of 10.8 minutes in Pycnogenol subjects. A significant decrease in cramps and in running and post-running pain was seen in the Pycnogenol group; there were no significant differences in controls. There was a significant post-triathlon decrease in PFR one hour after the end of the triathlon, with an average of -26.7, whereas PFR in controls increased. In Pycnogenol subjects there was a lower increase in oxidative stress, with a faster recovery to almost normal levels (<330 for these subjects). These variations in PFR values were interpreted as a faster metabolic recovery in subjects using Pycnogenol. This study suggests an interesting new application of natural supplementation with Pycnogenol which, together with proper hydration, good training and attention to nutrition, may improve training and performance both in normal subjects and in semi-professional athletes competing at high levels in difficult, high-stress sports such as the triathlon.

  20. MSAProbs-MPI: parallel multiple sequence aligner for distributed-memory systems.

    PubMed

    González-Domínguez, Jorge; Liu, Yongchao; Touriño, Juan; Schmidt, Bertil

    2016-12-15

    MSAProbs is a state-of-the-art protein multiple sequence alignment tool based on hidden Markov models. It can achieve high alignment accuracy at the expense of relatively long runtimes for large-scale input datasets. In this work we present MSAProbs-MPI, a distributed-memory parallel version of the multithreaded MSAProbs tool that is able to reduce runtimes by exploiting the compute capabilities of common multicore CPU clusters. Our performance evaluation on a cluster with 32 nodes (each containing two Intel Haswell processors) shows reductions in execution time of over one order of magnitude for typical input datasets. Furthermore, MSAProbs-MPI using eight nodes is faster than the GPU-accelerated QuickProbs running on a Tesla K20. Another strong point is that MSAProbs-MPI can deal with large datasets for which MSAProbs and QuickProbs might fail due to time and memory constraints, respectively. Source code in C++ and MPI, running on Linux systems, as well as a reference manual, are available at http://msaprobs.sourceforge.net. Contact: jgonzalezd@udc.es. Supplementary information: Supplementary data are available at Bioinformatics online.

  1. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs with complexity polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, relatively little computational work has been done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
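
    The width-one special case of such a dynamic programme is instructive: on a tree, maximum weighted independent set needs only a two-entry table per vertex (best value with the vertex in or out of the solution), which the treewidth version generalises to tables over bags of vertices. The sketch below is that textbook special case on an invented toy tree, not the INDDGO implementation.

        # Maximum weighted independent set on a tree by dynamic programming.
        def mwis_tree(adj, weight, root=0):
            def solve(v, parent):
                take, skip = weight[v], 0.0
                for u in adj[v]:
                    if u == parent:
                        continue
                    t, s = solve(u, v)
                    take += s            # v in the set: children must stay out
                    skip += max(t, s)    # v out of the set: children choose freely
                return take, skip
            return max(solve(root, -1))

        adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
        weight = {0: 1.0, 1: 4.0, 2: 2.0, 3: 3.0, 4: 2.0}
        print(mwis_tree(adj, weight))    # -> 7.0 (vertices 2, 3 and 4)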

  2. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGES

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  3. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
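
    The workload pattern being described is embarrassingly parallel: independent Monte Carlo histories can run on any node and be reduced at the end. The sketch below imitates this on a single machine with a process pool estimating pi; it illustrates only the workload shape, not any particular grid middleware.

        # Embarrassingly parallel Monte Carlo: independent workers, one reduction.
        import random
        from multiprocessing import Pool

        def histories(args):
            seed, n = args
            rng = random.Random(seed)                 # independent stream per worker
            return sum(rng.random()**2 + rng.random()**2 < 1.0 for _ in range(n))

        if __name__ == "__main__":
            n_workers, n_per_worker = 4, 250_000
            with Pool(n_workers) as pool:
                hits = sum(pool.map(histories,
                                    [(s, n_per_worker) for s in range(n_workers)]))
            print("pi ~", 4 * hits / (n_workers * n_per_worker))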

  4. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis, based on simultaneous alignment and folding, suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (quartic time, O(n^4)). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics.
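
    For a sense of what "quadratic time, on par with sequence alignment" means, the textbook Needleman-Wunsch dynamic programme below aligns two sequences in O(n^2); SPARSE reaches the same asymptotic cost while simultaneously aligning and folding. The scoring values are arbitrary, and this sketch is not part of SPARSE itself.

        # Needleman-Wunsch global alignment: the classic O(n^2) table fill.
        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            m, n = len(a), len(b)
            D = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1): D[i][0] = i * gap
            for j in range(1, n + 1): D[0][j] = j * gap
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    D[i][j] = max(D[i - 1][j - 1] + s,  # (mis)match
                                  D[i - 1][j] + gap,    # gap in b
                                  D[i][j - 1] + gap)    # gap in a
            return D[m][n]

        print(needleman_wunsch("GCAUGC", "GAUUC"))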

  5. Measuring Sizes & Shapes of Galaxies

    NASA Astrophysics Data System (ADS)

    Kusmic, Samir; Holwerda, Benne Willem

    2018-01-01

    Galaxy morphometrics are calculated with software, cutting down on the time needed to categorize galaxies. However, new surveys in the coming decade are expected to count upwards of a thousand times more galaxies than current surveys, greatly increasing the time needed just to process the data. In this research, we looked into how we can reduce the time it takes to obtain morphometric parameters in order to classify galaxies, and also how precise we can get relative to other findings. The software of choice is Source Extractor, known for its short run-times and recently updated to compute morphometric parameters. The test was done by running CANDELS data, five fields in the J and H filters, through Source Extractor, then cross-correlating the new catalog with one created with GALFIT, obtained from van der Wel et al. 2014, and then with spectroscopic redshift data. With Source Extractor, we look at how many galaxies are counted, how precise the computation is, how to classify the morphometry, and how the results stand against other findings. The run-time was approximately 10 hours when cross-correlated with GALFIT and approximately 8 hours with the spectroscopic redshift; these were expected times, as Source Extractor is already faster than GALFIT's run-time by a large factor. Source Extractor's recovery was also large: 79.24% of GALFIT's count. However, the precision is highly variable. To combat this, we created two thresholds to see which would be better; we ended up picking an unbiased isophotal area threshold as the better choice. Still, with such a threshold, the spread was relatively wide. Comparing the parameters with redshift showed agreeable findings, although not necessarily in numerical value. From the results, we see Source Extractor as a good first-look tool, to be followed up by other software.

  6. Habituation contributes to within-session changes in free wheel running.

    PubMed Central

    Aoyama, K; McSweeney, F K

    2001-01-01

    Three experiments tested the hypothesis that habituation contributes to the regulation of wheel running. Rats ran in a wheel for 30-min sessions. Experiment 1 demonstrated spontaneous recovery. Rats ran more and the within-session decreases in running were smaller after 2 days of wheel deprivation than after 1 day. Experiment 2 demonstrated dishabituation. Running rate increased immediately after the termination of a brief extra event (application of the brake or flashing of the houselight). Experiment 3 demonstrated stimulus specificity. Rats completed the second half of the session in either the same wheel as the first half, or a different wheel. Second-half running was faster in the latter case. Within-session patterns of running were well described by equations that describe data from the habituation, motivation, and operant literatures. These results suggest that habituation contributes to the regulation of running. In fact, habituation provides a better explanation for the termination of wheel running than fatigue, the variable to which this termination is usually attributed. Overall, the present findings are consistent with the proposition that habituation and sensitization contribute to the regulation of several forms of motivated behavior. PMID:11768712

  7. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  8. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
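
    The operator-splitting idea can be sketched serially: within each time step, molecules first diffuse, then react, with each operator advanced independently. The toy below does this on a periodic 1-D grid with a single first-order decay reaction; the grid, rates, and reaction are invented for illustration, and this is not the STEPS implementation the paper parallelises over tetrahedral meshes.

        # Operator splitting on a 1-D grid: diffuse, then fire reactions per voxel.
        import numpy as np

        rng = np.random.default_rng(2)
        n, dt, D, dx, k = 50, 0.01, 1.0, 1.0, 0.05
        A = rng.integers(0, 20, n).astype(float)   # molecule counts per voxel

        for _ in range(1000):
            # diffusion operator (explicit finite difference, periodic boundary)
            A += dt * D / dx**2 * (np.roll(A, 1) - 2 * A + np.roll(A, -1))
            # reaction operator: first-order decay A -> 0, sampled per voxel
            A -= rng.binomial(np.maximum(A, 0).astype(int), 1 - np.exp(-k * dt))
        print(A.sum())                             # surviving molecules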

  9. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  10. Development of a Stiffness-Based Chemistry Load Balancing Scheme, and Optimization of Input/Output and Communication, to Enable Massively Parallel High-Fidelity Internal Combustion Engine Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodavasal, Janardhan; Harms, Kevin; Srivastava, Priyesh

    A closed-cycle gasoline compression ignition engine simulation near top dead center (TDC) was used to profile the performance of a parallel commercial engine computational fluid dynamics code, as it was scaled on up to 4096 cores of an IBM Blue Gene/Q supercomputer. The test case has 9 million cells near TDC, with a fixed mesh size of 0.15 mm, and was run on configurations ranging from 128 to 4096 cores. Profiling was done for a small duration of 0.11 crank angle degrees near TDC during ignition. Optimization of input/output performance resulted in a significant speedup in reading restart files, and in an over 100-times speedup in writing restart files and files for post-processing. Improvements to communication resulted in a 1400-times speedup in the mesh load balancing operation during initialization, on 4096 cores. An improved, “stiffness-based” algorithm for load balancing chemical kinetics calculations was developed, which results in an over 3-times faster run-time near ignition on 4096 cores relative to the original load balancing scheme. With this improvement to load balancing, the code achieves over 78% scaling efficiency on 2048 cores, and over 65% scaling efficiency on 4096 cores, relative to 256 cores.
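
    A cost-aware balancing step of this general kind can be sketched with a greedy longest-processing-time heuristic: estimate a per-cell chemistry cost (standing in for stiffness) and repeatedly hand the most expensive remaining cell to the least-loaded rank. This is a generic illustration of the idea, with invented costs, not the scheme developed in the paper.

        # Greedy cost-aware load balancing of cells across MPI-style ranks.
        import heapq, random

        random.seed(3)
        cells = [random.lognormvariate(0, 1.5) for _ in range(10_000)]  # cost estimates

        def balance(costs, n_ranks):
            heap = [(0.0, r) for r in range(n_ranks)]   # (current load, rank)
            heapq.heapify(heap)
            assignment = {}
            for cell, c in sorted(enumerate(costs), key=lambda x: -x[1]):
                load, r = heapq.heappop(heap)           # least-loaded rank so far
                assignment[cell] = r
                heapq.heappush(heap, (load + c, r))
            return assignment, heap

        _, heap = balance(cells, 64)
        worst = max(load for load, _ in heap)
        print(worst / (sum(cells) / 64))                # imbalance vs. perfect split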

  11. Battery Cell Balancing Optimisation for Battery Management System

    NASA Astrophysics Data System (ADS)

    Yusof, M. S.; Toha, S. F.; Kamisan, N. A.; Hashim, N. N. W. N.; Abdullah, M. A.

    2017-03-01

    Battery cell balancing in electrical systems such as home electronic equipment and electric vehicles is very important for extending battery run time, commonly known as battery life. The underlying solution is to equalize the cell voltage and state of charge (SOC) between the cells when they are fully charged. In order to control and extend battery life, cell balancing is designed and manipulated so as to also shorten the charging process. Active and passive cell balancing strategies enable balancing of the battery in a high-performance configuration so that the charging process is faster. The experiments and simulations cover an analysis of how fast the battery can balance within a given time. The simulation-based analysis is conducted to verify the use of optimisation in active or passive cell balancing to extend battery life over long periods of time.

  12. Edit distance for marked point processes revisited: An implementation by binary integer programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito; Aihara, Kazuyuki

    2015-12-15

    We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, it allows a wide variety of software and hardware, even spin glasses and coherent Ising machines, to be applied to calculate the edit distance for marked point processes; second, it runs faster than the previous implementation when the difference between the numbers of events in the two time windows of a marked point process is large.
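
    For contrast with the binary-integer-program formulation, the classic dynamic-programming form of such an edit distance (Victor-Purpura style, for unmarked event times) fits in a dozen lines: events can be inserted, deleted, or shifted in time. The costs below are illustrative, and this sketch handles neither marks nor the solver flexibility that motivates the paper's formulation.

        # Edit distance between two event-time sequences by dynamic programming:
        # insert/delete cost 1, shift cost q*|dt| (Victor-Purpura style).
        def edit_distance(s, t, q=1.0):
            m, n = len(s), len(t)
            D = [[0.0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1): D[i][0] = i    # delete all events of s
            for j in range(1, n + 1): D[0][j] = j    # insert all events of t
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    D[i][j] = min(D[i - 1][j] + 1,                       # delete
                                  D[i][j - 1] + 1,                       # insert
                                  D[i - 1][j - 1] + q * abs(s[i - 1] - t[j - 1]))
            return D[m][n]

        print(edit_distance([0.1, 0.5, 1.2], [0.2, 1.1]))   # -> 1.2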

  13. Evaluation of nonlinear structural dynamic responses using a fast-running spring-mass formulation

    NASA Astrophysics Data System (ADS)

    Benjamin, A. S.; Altman, B. S.; Gruda, J. D.

    In today's world, accurate finite-element simulations of large nonlinear systems may require meshes composed of hundreds of thousands of degrees of freedom. Even with today's fast computers and the promise of ever-faster ones in the future, central processing unit (CPU) expenditures for such problems could be measured in days. Many contemporary engineering problems, such as those found in risk assessment, probabilistic structural analysis, and structural design optimization, cannot tolerate the cost or turnaround time for such CPU-intensive analyses, because these applications require a large number of cases to be run with different inputs. For many risk assessment applications, analysts would prefer running times to be measurable in minutes. There is therefore a need for approximation methods which can solve such problems far more efficiently than the very detailed methods and yet maintain an acceptable degree of accuracy. For this purpose, we have been working on two methods of approximation: neural networks and spring-mass models. This paper presents our work and results to date for spring-mass modeling and analysis, since we are further along in this area than in the neural network formulation. It describes the physical and numerical models contained in a code we developed called STRESS, which stands for 'Spring-mass Transient Response Evaluation for structural Systems'. The paper also presents results for a demonstration problem, and compares these with results obtained for the same problem using PRONTO3D, a state-of-the-art finite element code which was also developed at Sandia.
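
    As a flavour of what a fast-running spring-mass formulation looks like, the sketch below integrates a damped chain of springs under a step load with a symplectic Euler update; the geometry, constants, and loading are invented for illustration and bear no relation to the models inside the STRESS code.

        # Damped spring-mass chain (first mass anchored) under a step tip load.
        import numpy as np

        n, k, m, c, dt = 10, 1e4, 1.0, 5.0, 1e-4       # stiffness, mass, damping
        x = np.zeros(n); v = np.zeros(n)

        def accel(x, v, f_ext):
            stretch = np.diff(np.concatenate(([0.0], x)))  # spring elongations
            f_spring = -k * stretch                        # force from spring below
            f = f_spring.copy()
            f[:-1] -= f_spring[1:]                         # reaction from spring above
            return (f + f_ext - c * v) / m

        f_ext = np.zeros(n); f_ext[-1] = 100.0             # step load on the free end
        for _ in range(20000):                             # symplectic Euler update
            v += dt * accel(x, v, f_ext)
            x += dt * v
        print(x[-1])                                       # tip deflection, ~0.1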

  14. Forefoot running improves pain and disability associated with chronic exertional compartment syndrome.

    PubMed

    Diebal, Angela R; Gregory, Robert; Alitz, Curtis; Gerber, J Parry

    2012-05-01

    Anterior compartment pressures of the leg as well as kinematic and kinetic measures are significantly influenced by running technique. It is unknown whether adopting a forefoot strike technique will decrease the pain and disability associated with chronic exertional compartment syndrome (CECS) in hindfoot strike runners. For people who have CECS, adopting a forefoot strike running technique will lead to decreased pain and disability associated with this condition. Case series; Level of evidence, 4. Ten patients with CECS indicated for surgical release were prospectively enrolled. Resting and postrunning compartment pressures, kinematic and kinetic measurements, and self-report questionnaires were taken for all patients at baseline and after 6 weeks of a forefoot strike running intervention. Run distance and reported pain levels were recorded. A 15-point global rating of change (GROC) scale was used to measure perceived change after the intervention. After 6 weeks of forefoot run training, mean postrun anterior compartment pressures significantly decreased from 78.4 ± 32.0 mm Hg to 38.4 ± 11.5 mm Hg. Vertical ground-reaction force and impulse values were significantly reduced. Running distance significantly increased from 1.4 ± 0.6 km before intervention to 4.8 ± 0.5 km 6 weeks after intervention, while reported pain while running significantly decreased. The Single Assessment Numeric Evaluation (SANE) significantly increased from 49.9 ± 21.4 to 90.4 ± 10.3, and the Lower Leg Outcome Survey (LLOS) significantly increased from 67.3 ± 13.7 to 91.5 ± 8.5. The GROC scores at 6 weeks after intervention were between 5 and 7 for all patients. One year after the intervention, the SANE and LLOS scores were greater than reported during the 6-week follow-up. Two-mile run times were also significantly faster than preintervention values. No patient required surgery. In 10 consecutive patients with CECS, a 6-week forefoot strike running intervention led to decreased postrunning lower leg intracompartmental pressures. Pain and disability typically associated with CECS were greatly reduced for up to 1 year after intervention. Surgical intervention was avoided for all patients.

  15. Microbial community dynamics and biogas production from manure fractions in sludge bed anaerobic digestion.

    PubMed

    Nordgård, A S R; Bergland, W H; Bakke, R; Vadstein, O; Østgaard, K; Bakke, I

    2015-12-01

    To elucidate how granular sludge inoculum and particle-rich organic loading affect the structure of the microbial communities and process performance in upflow anaerobic sludge bed (UASB) reactors, we investigated four reactors run on dairy manure filtrate and four on pig manure supernatant for three months, achieving similar methane yields. The reactors fed with the less particle-rich pig manure stabilized faster and had the highest capacity. Microbial community dynamics analysed by a PCR/denaturing gradient gel electrophoresis approach showed that the influent was a major determinant of the composition of the reactor communities. Comparisons of pre-adapted and non-adapted inoculum in the reactors run on pig manure supernatant showed that the community structure of the non-adapted inoculum adapted in approximately two months. Microbiota variance partitioning analysis revealed that running time, organic loading rate and inoculum together explained 26% and 31% of the variance in bacterial and archaeal communities, respectively. The microbial communities of UASBs adapted to the reactor conditions in treatment of particle-rich manure fractions, obtaining high capacity, especially on pig manure supernatant. These findings provide relevant insight into the microbial community dynamics in the start-up and operation of sludge bed reactors for methane production from slurry fractions, a major potential source of biogas.

  16. Differential rates of feldspar weathering in granitic regoliths

    USGS Publications Warehouse

    White, A.F.; Bullen, T.D.; Schulz, M.S.; Blum, A.E.; Huntington, T.G.; Peters, N.E.

    2001-01-01

    Differential rates of plagioclase and K-feldspar weathering commonly observed in bedrock and soil environments are examined in terms of chemical kinetic and solubility controls and hydrologic permeability. For the Panola regolith, in the Georgia Piedmont Province of the southeastern United States, petrographic observations, coupled with elemental balances and 87Sr/86Sr ratios, indicate that plagioclase is being converted to kaolinite at depths > 6 m in the granitic bedrock. K-feldspar remains pristine in the bedrock but subsequently weathers to kaolinite in the overlying saprolite. In contrast, both plagioclase and K-feldspar remain stable in granitic bedrocks elsewhere in the Piedmont Province, such as Davis Run, Virginia, where the feldspars weather concurrently in an overlying thick saprolite sequence. Kinetic rate constants, mineral surface areas, and secondary hydraulic conductivities are fitted to feldspar losses with depth in the Panola and Davis Run regoliths using a time-depth computer spreadsheet model. The primary hydraulic conductivities, describing the rates of meteoric water penetration into the pristine granites, are assumed to be equal to the propagation rates of the weathering fronts, which, based on cosmogenic isotope dating, are 7 m/10^6 yr for the Panola regolith and 4 m/10^6 yr for the Davis Run regolith. Best fits in the calculations indicate that the kinetic rate constants for plagioclase in both regoliths are factors of two to three times faster than those for K-feldspar, which is in agreement with experimental findings. However, the range of plagioclase and K-feldspar rates (kr = 1.5 x 10^-17 to 2.8 x 10^-16 mol m^-2 s^-1) is three to four orders of magnitude lower than that for experimental feldspar dissolution, and these are among the slowest rates yet recorded for natural feldspar weathering. Such slow rates are attributed to the relatively old geomorphic ages of the Panola and Davis Run regoliths, implying that mineral surface reactivity decreases significantly with time. Differential feldspar weathering in the low-permeability Panola bedrock environment is more dependent on relative feldspar solubilities than on differences in kinetic reaction rates. Such weathering is very sensitive to the primary and secondary hydraulic conductivities (qp and qs), which control both the fluid volumes passing through the regolith and the thermodynamic saturation of the feldspars. Bedrock permeability is primarily intragranular and is created by internal weathering of networks of interconnected plagioclase phenocrysts. Saprolite permeability is principally intergranular and is the result of dissolution of silicate phases during isovolumetric weathering. A secondary-to-primary hydraulic conductivity ratio of qs/qp = 150 in the Panola bedrock results in kinetically controlled plagioclase dissolution but thermodynamically inhibited K-feldspar reaction. This result is in accord with calculated chemical saturation states for groundwater sampled in the Panola Granite. In contrast, the greater secondary conductivity in the Davis Run saprolite, qs/qp = 800, produces both kinetically controlled plagioclase and K-feldspar dissolution. Faster plagioclase reaction, leading to bedrock weathering in the Panola Granite but not at Davis Run, is attributed to a higher anorthite component of the plagioclase and a wetter and warmer climate. In addition, the Panola Granite has an abnormally high content of disseminated calcite, the dissolution of which precedes the plagioclase weathering front, thus creating additional secondary permeability.

  17. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
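
    The direct method underlying SSA is compact enough to sketch: sample an exponential waiting time from the total propensity, then fire a reaction. The toy below, with an assumed bimolecular reaction A + B -> C and made-up rate constants, shows why independent realisations parallelise so naturally: each call is a self-contained realisation that could run on its own GPU thread.

        # Direct-method Gillespie SSA for A + B -> C; one realisation per call.
        import random

        def ssa(a0=100, b0=80, k=0.005, t_end=10.0, seed=None):
            rng = random.Random(seed)
            t, a, b, c = 0.0, a0, b0, 0
            while True:
                prop = k * a * b                     # propensity of A + B -> C
                if prop == 0.0:
                    break                            # no reactants left
                t += rng.expovariate(prop)           # exponential waiting time
                if t > t_end:
                    break
                a, b, c = a - 1, b - 1, c + 1        # fire the reaction
            return a, b, c

        results = [ssa(seed=s) for s in range(1000)]  # independent realisations
        print(sum(r[2] for r in results) / len(results))  # mean product count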

  18. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  19. Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Kaine, Greg

    2002-11-01

    In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the TLP available within a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction for better exploiting TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel® VTune™ performance analyzer. The experimental measurements are compared to theoretical results.

  20. Matching optical flow to motor speed in virtual reality while running on a treadmill

    PubMed Central

    Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed–i.e., treadmill’s speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and percentage of underestimation relative to running speed ranged from 15% at 8km/h to 31% at 12km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care. PMID:29641564

  1. Matching optical flow to motor speed in virtual reality while running on a treadmill.

    PubMed

    Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed-i.e., treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and percentage of underestimation relative to running speed ranged from 15% at 8km/h to 31% at 12km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care.
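
    The matching procedure is essentially an adaptive staircase. The sketch below simulates it with a hypothetical observer whose perceived visual speed is biased by a constant factor, mimicking the reported underestimation; the bias, step sizes, and noise level are assumptions for illustration, not the study's parameters.

        # Adaptive staircase converging on the Point of Subjective Equality.
        import random

        def staircase(run_speed, bias=1.2, step=0.5, trials=60, noise=0.4):
            rng = random.Random(0)
            visual = run_speed * 2.0                  # start clearly too fast
            for _ in range(trials):
                perceived = visual / bias + rng.gauss(0, noise)
                if perceived > run_speed:             # "scene faster": slow it down
                    visual -= step
                else:                                 # "scene slower": speed it up
                    visual += step
                step = max(0.05, step * 0.95)         # shrink steps toward the PSE
            return visual

        for v in (8, 10, 12):
            print(v, round(staircase(v), 2))          # PSE exceeds running speed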

  2. Design of a lamella settler for biomass recycling in continuous ethanol fermentation process.

    PubMed

    Tabera, J; Iznaola, M A

    1989-04-20

    The design and application of a settler to a continuous fermentation process with yeast recycle were studied. The compact lamella-type settler was chosen to avoid the large volumes associated with conventional settling tanks. A rationale for the design method is presented. The sedimentation area was determined by classical batch settling rate tests and sedimentation capacity calculations. Volume was calculated from limitations on the residence time of the microorganisms in the settler, rather than from sludge thickening considerations. Fermentation rate tests with yeast after different sedimentation periods were carried out to define a suitable residence time. Continuous cell recycle fermentation runs, performed with the old and new sedimentation devices, show that the lamella settler improves biomass recycling efficiency, enabling the process to operate at higher sugar concentrations and faster dilution rates.
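
    The sizing step can be illustrated with the classical surface-overflow-rate calculation: the projected settling area must be at least the flow rate divided by the batch-test settling velocity, and each inclined plate contributes its area times the cosine of its inclination. All numbers in the sketch below are assumed example values, not figures from the paper.

        import math

        Q = 0.5            # broth flow rate, m^3/h (assumed)
        v_settle = 0.15    # yeast settling velocity from batch tests, m/h (assumed)
        theta = 55.0       # plate inclination from horizontal, degrees (assumed)
        plate_area = 0.4   # area of one lamella plate, m^2 (assumed)

        required_area = Q / v_settle                    # projected area, m^2
        area_per_plate = plate_area * math.cos(math.radians(theta))
        n_plates = math.ceil(required_area / area_per_plate)

        print(f"projected settling area: {required_area:.2f} m^2")
        print(f"lamella plates needed:   {n_plates}")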

  3. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108 and CDC 6600 computers.

  4. LETTER TO THE EDITOR: Optimization of partial search

    NASA Astrophysics Data System (ADS)

    Korepin, Vladimir E.

    2005-11-01

    A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
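
    The scale of the speed-up can be illustrated with the standard query counts: a full Grover search needs about (pi/4)*sqrt(N) oracle queries, while partial search over K blocks saves a number of queries that grows like sqrt(N/K). In the sketch below, the 0.34 coefficient is an assumed illustrative value, not the optimized constant derived in the letter.

        import math

        def grover_queries(N: int) -> float:
            return math.pi / 4 * math.sqrt(N)      # full database search

        def partial_search_queries(N: int, K: int, coeff: float = 0.34) -> float:
            # finds only the block holding the target; saving ~ sqrt(N/K)
            return grover_queries(N) - coeff * math.sqrt(N / K)

        N, K = 2**20, 4
        print(f"full search:    {grover_queries(N):7.0f} queries")
        print(f"partial search: {partial_search_queries(N, K):7.0f} queries")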

  5. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.

    2013-04-01

    We present a simple-to-use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
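
    As a minimal serial analogue of the scheme summarized above (classical fourth-order Runge-Kutta in time, second-order central differences in space), the NumPy sketch below integrates the one-dimensional cubic NLSE with periodic boundaries. The focusing sign convention, grid, step size, and initial pulse are illustrative assumptions, not NLSEmagic's exact configuration.

        import numpy as np

        # 1D cubic NLSE  i*psi_t = -psi_xx - |psi|^2 psi  (focusing sign assumed)
        N, L = 256, 40.0
        dx = L / N
        x = np.arange(N) * dx - L / 2
        dt = 0.4 * dx**2                    # small explicit step for stability

        def rhs(psi):
            lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
            return 1j * (lap + np.abs(psi)**2 * psi)

        psi = 1.0 / np.cosh(x) * np.exp(0.5j * x)   # sech pulse with velocity phase

        for _ in range(2000):               # classical RK4 time stepping
            k1 = rhs(psi)
            k2 = rhs(psi + 0.5 * dt * k1)
            k3 = rhs(psi + 0.5 * dt * k2)
            k4 = rhs(psi + dt * k3)
            psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # the L2 norm (~2 for a sech pulse) should be nearly conserved
        print("norm drift:", abs(np.sum(np.abs(psi)**2) * dx - 2.0))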

  6. The MSRC ab initio methods benchmark suite: A measurement of hardware and software performance in the area of electronic structure methods

    NASA Astrophysics Data System (ADS)

    Feller, D. F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  7. Open surgical treatment for chronic midportion Achilles tendinopathy: faster recovery with the soleus fibres transfer technique.

    PubMed

    Benazzo, Francesco; Zanon, Giacomo; Klersy, Catherine; Marullo, Matteo

    2016-06-01

    The study aimed to compare two methods of open surgical treatment for midportion Achilles tendinopathy in sportsmen. A novel technique, consisting of transferring some soleus fibres into the degenerated tendon to improve its vascularization, and longitudinal tenotomies are evaluated and compared. From 2006 to 2011, fifty-two competitive and noncompetitive athletes affected by midportion Achilles tendinopathy were surgically treated and prospectively evaluated at 6 months and at a final 4-year mean follow-up. Twenty patients had longitudinal tenotomies, and thirty-two had soleus fibres transfer. Clinical outcome was evaluated by the American Orthopaedic Foot and Ankle Society (AOFAS) score and the Victorian Institute of Sports Assessment-Achilles (VISA-A) score. Time to return to walk and to run and tendon thickening were also recorded. Patients in the soleus transfer group had a higher increase in AOFAS and VISA-A score at 6 months and at the mean 4-year final follow-up (by 5.4 points, 95 % CI 2.9-7.9, p < 0.001 and by 5.7 points, 95 % CI 2.5-8.9, p = 0.001, for AOFAS and VISA-A, respectively). They also needed less time to return to run: 98.9 ± 17.4 days compared to 122.2 ± 26.3 days for the longitudinal tenotomies group (p = 0.0019). The soleus transfer group had a greater prevalence of tendon thickening (59.4 % compared to 30.0 % in the longitudinal tenotomies group, p = 0.037). Open surgery for midportion Achilles tendinopathy is safe and effective in the medium term. Despite similar outcomes in postoperative functional scores, soleus transfer allows a faster recovery but has a higher incidence of tendon thickening. These results suggest the use of the soleus graft technique in high-level athletes. Prospective comparative study, Level II.

  8. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  9. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

    PubMed

    Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik

    2016-11-11

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at a short range (0-30 m). However, DeepAnomaly has much fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).

  10. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    PubMed Central

    Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik

    2016-01-01

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at a short range (0–30 m). However, DeepAnomaly has much fewer model parameters and a 7.28-times faster processing time per image (182 ms vs. 25 ms). Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717

  11. The new and computationally efficient MIL-SOM algorithm: potential benefits for visualization and analysis of a large-scale high-dimensional clinically acquired geographic data.

    PubMed

    Oyana, Tonny J; Achenie, Luke E K; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, improved updating procedure, better performance, good robustness, and runs faster than Kohonen's SOM.

  12. The New and Computationally Efficient MIL-SOM Algorithm: Potential Benefits for Visualization and Analysis of a Large-Scale High-Dimensional Clinically Acquired Geographic Data

    PubMed Central

    Oyana, Tonny J.; Achenie, Luke E. K.; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weights of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive, improved updating procedure, better performance, good robustness, and runs faster than Kohonen's SOM. PMID:22481977
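
    For reference, the sketch below implements the classical Kohonen update that MIL-SOM modifies. In PID terms, the standard rule is purely "proportional" to the error (x - w); the paper's integral and derivative contributions are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        grid = 10                                  # 10x10 map
        W = rng.random((grid, grid, 2))            # weights for 2-D inputs
        X = rng.random((5000, 2))                  # toy training data
        ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")

        for t, x in enumerate(X):
            lr = 0.5 * (1 - t / len(X))            # decaying learning rate
            sigma = 3.0 * (1 - t / len(X)) + 0.5   # decaying neighborhood width
            d = np.sum((W - x) ** 2, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            W += lr * h[..., None] * (x - W)       # proportional update toward x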

  13. Voluntary resistance running wheel activity pattern and skeletal muscle growth in rats.

    PubMed

    Legerlotz, Kirsten; Elliott, Bradley; Guillemin, Bernard; Smith, Heather K

    2008-06-01

    The aims of this study were to characterize the pattern of voluntary activity of young rats in response to resistance loading on running wheels and to determine the effects of the activity on the growth of six limb skeletal muscles. Male Sprague-Dawley rats (4 weeks old) were housed individually with a resistance running wheel (R-RUN, n = 7) or a conventional free-spinning running wheel (F-RUN, n = 6) or without a wheel, as non-running control animals (CON, n = 6). The torque required to move the wheel in the R-RUN group was progressively increased, and the activity (velocity, distance and duration of each bout) of the two running wheel groups was recorded continuously for 45 days. The R-RUN group performed many more, shorter and faster bouts of running than the F-RUN group, yet the mean daily distance was not different between the F-RUN (1.3 +/- 0.2 km) and R-RUN group (1.4 +/- 0.6 km). Only the R-RUN resulted in a significantly (P < 0.05) enhanced muscle wet mass, relative to the increase in body mass, of the plantaris (23%) and vastus lateralis muscle (17%), and the plantaris muscle fibre cross-sectional area, compared with CON. Both F-RUN and R-RUN led to a significantly greater wet mass relative to increase in body mass and muscle fibre cross-sectional area in the soleus muscle compared with CON. We conclude that the pattern of voluntary activity on a resistance running wheel differs from that on a free-spinning running wheel and provides a suitable model to induce physiological muscle hypertrophy in rats.

  14. BIOENERGETIC DIFFERENCES DURING WALKING AND RUNNING IN TRANSFEMORAL AMPUTEE RUNNERS USING ARTICULATING AND NON-ARTICULATING KNEE PROSTHESES

    PubMed Central

    Highsmith, M. Jason; Kahle, Jason T.; Miro, Rebecca M.; Mengelkoch, Larry J.

    2016-01-01

    Transfemoral amputation (TFA) patients require considerably more energy to walk and run than non-amputees. The purpose of this study was to examine potential bioenergetic differences (oxygen uptake (VO2), heart rate (HR), and ratings of perceived exertion (RPE)) for TFA patients utilizing a conventional running prosthesis with an articulating knee mechanism versus a running prosthesis with a non-articulating knee joint. Four trained TFA runners (n = 4) were accommodated to and tested with both conditions. VO2 and HR were significantly lower (p ≤ 0.05) in five of eight fixed walking and running speeds for the prosthesis with an articulating knee mechanism. TFA demonstrated a trend for lower RPE at six of eight walking speeds using the prosthesis with the articulated knee condition. A trend was observed for self-selected walking speed, self-selected running speed, and maximal speed to be faster for TFA subjects using the prosthesis with the articulated knee condition. Finally, all four TFA participants subjectively preferred running with the prosthesis with the articulated knee condition. These findings suggest that, for trained TFA runners, a running prosthesis with an articulating knee prosthesis reduces ambulatory energy costs and enhances subjective perceptive measures compared to using a non-articulating knee prosthesis. PMID:28066524

  15. The mechanics and energetics of human walking and running: a joint level perspective.

    PubMed

    Farris, Dominic James; Sawicki, Gregory S

    2012-01-07

    Humans walk and run at a range of speeds. While steady locomotion at a given speed requires no net mechanical work, moving faster does demand both more positive and negative mechanical work per stride. Is this increased demand met by increasing power output at all lower limb joints or just some of them? Does running rely on different joints for power output than walking? How does this contribute to the metabolic cost of locomotion? This study examined the effects of walking and running speed on lower limb joint mechanics and metabolic cost of transport in humans. Kinematic and kinetic data for 10 participants were collected for a range of walking (0.75, 1.25, 1.75, 2.0 m s(-1)) and running (2.0, 2.25, 2.75, 3.25 m s(-1)) speeds. Net metabolic power was measured by indirect calorimetry. Within each gait, there was no difference in the proportion of power contributed by each joint (hip, knee, ankle) to total power across speeds. Changing from walking to running resulted in a significant (p = 0.02) shift in power production from the hip to the ankle which may explain the higher efficiency of running at speeds above 2.0 m s(-1) and shed light on a potential mechanism behind the walk-run transition.

  16. ALICE Expert System

    NASA Astrophysics Data System (ADS)

    Ionita, C.; Carena, F.

    2014-06-01

    The ALICE experiment at CERN employs a number of human operators (shifters), who have to make sure that the experiment is always in a state compatible with taking Physics data. Given the complexity of the system and the myriad of errors that can arise, this is not always a trivial task. The aim of this paper is to describe an expert system that is capable of assisting human shifters in the ALICE control room. The system diagnoses potential issues and attempts to make smart recommendations for troubleshooting. At its core, a Prolog engine infers whether a Physics or a technical run can be started based on the current state of the underlying sub-systems. A separate C++ component queries certain SMI objects and stores their state as facts in a Prolog knowledge base. By mining the data stored in different system logs, the expert system can also diagnose errors arising during a run. Currently the system is used by the on-call experts for faster response times, but we expect it to be adopted as a standard tool by regular shifters during the next data taking period.

  17. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Biesiadecki, Jeffrey J.; Ali, Khaled S.

    2008-01-01

    Visual Target Tracking (VTT) has been implemented in the new Mars Exploration Rover (MER) Flight Software (FSW) R9.2 release, which is now running on both Spirit and Opportunity rovers. Applying the normalized cross-correlation (NCC) algorithm with template image magnification and roll compensation on MER Navcam images, VTT tracks the target and enables the rover to approach the target within a few cm over a 10 m traverse. Each VTT update takes 1/2 to 1 minute on the rovers, 2-3 times faster than one Visual Odometry (Visodom) update. VTT is a key element to achieve a target approach and instrument placement over a 10-m run in a single sol in contrast to the original baseline of 3 sols. VTT has been integrated into the MER FSW so that it can operate with any combination of blind driving, Autonomous Navigation (Autonav) with hazard avoidance, and Visodom. VTT can either guide the rover towards the target or simply image the target as the rover drives by. Three recent VTT operational checkouts on Opportunity were all successful, tracking the selected target reliably within a few pixels.
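
    The core matching operation named above, normalized cross-correlation, can be sketched in a few lines of NumPy. This plain version omits the MER-specific template magnification and roll compensation.

        import numpy as np

        def ncc_match(image, template):
            """Return the (row, col) and score of the best NCC match."""
            th, tw = template.shape
            t = template - template.mean()
            t_norm = np.sqrt((t ** 2).sum())
            best, best_score = None, -2.0
            for i in range(image.shape[0] - th + 1):
                for j in range(image.shape[1] - tw + 1):
                    w = image[i:i + th, j:j + tw]
                    wz = w - w.mean()
                    denom = np.sqrt((wz ** 2).sum()) * t_norm
                    if denom == 0:
                        continue
                    score = (wz * t).sum() / denom     # in [-1, 1]
                    if score > best_score:
                        best, best_score = (i, j), score
            return best, best_score

        img = np.random.rand(64, 64)
        tmpl = img[20:28, 30:38].copy()   # a known patch as the "target"
        print(ncc_match(img, tmpl))       # -> ((20, 30), ~1.0)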

  18. Performance analysis of a fault inferring nonlinear detection system algorithm with integrated avionics flight data

    NASA Technical Reports Server (NTRS)

    Caglayan, A. K.; Godiwala, P. M.; Morrell, F. R.

    1985-01-01

    This paper presents the performance analysis results of a fault inferring nonlinear detection system (FINDS) using integrated avionics sensor flight data for the NASA ATOPS B-737 aircraft in a Microwave Landing System (MLS) environment. First, an overview of the FINDS algorithm structure is given. Then, aircraft state estimate time histories and statistics for the flight data sensors are discussed. This is followed by an explanation of modifications made to the detection and decision functions in FINDS to improve false alarm and failure detection performance. Next, the failure detection and false alarm performance of the FINDS algorithm are analyzed by injecting bias failures into fourteen sensor outputs over six repetitive runs of the five minutes of flight data. Results indicate that the detection speed, failure level estimation, and false alarm performance show a marked improvement over the previously reported simulation runs. In agreement with earlier results, detection speed is faster for filter measurement sensors such as MLS than for filter input sensors such as flight control accelerometers. Finally, the progress in modifications of the FINDS algorithm design to accommodate flight computer constraints is discussed.

  19. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
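
    The fixed-point versus Newton trade-off can be seen on a toy stiff scalar ODE. The sketch below takes a single implicit trapezoidal step of y' = -1000*y with both solvers; the equation and step size are illustrative stand-ins for the expensive dislocation-dynamics force calculations.

        lam, dt, y0 = -1000.0, 0.01, 1.0
        f = lambda y: lam * y

        def residual(y1):                 # trapezoidal step: solve residual = 0
            return y1 - y0 - 0.5 * dt * (f(y0) + f(y1))

        # Plain fixed point diverges here: the contraction factor |dt*lam/2| = 5
        y1 = y0
        for _ in range(50):
            y1 = y0 + 0.5 * dt * (f(y0) + f(y1))
        print("fixed point:", y1)         # blows up

        # Newton converges immediately (the residual is linear in y1)
        y1, dres = y0, 1.0 - 0.5 * dt * lam
        for _ in range(5):
            y1 -= residual(y1) / dres
        exact = y0 * (1 + 0.5 * dt * lam) / (1 - 0.5 * dt * lam)
        print("newton:", y1, "exact:", exact)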

  20. Bike and run pacing on downhill segments predict Ironman triathlon relative success.

    PubMed

    Johnson, Evan C; Pryor, J Luke; Casa, Douglas J; Belval, Luke N; Vance, James S; DeMartini, Julie K; Maresh, Carl M; Armstrong, Lawrence E

    2015-01-01

    To determine whether performance- and physiology-based pacing characteristics over the varied terrain of a triathlon predicted relative bike, run, and/or overall success. Poor self-regulation of intensity during long distance (Full Iron) triathlon can manifest in adverse discontinuities in performance. Observational study of a random sample of Ironman World Championship athletes. High performing (HP) and low performing (LP) groups were established upon race completion. Participants wore global positioning system and heart rate enabled watches during the race. Percentage difference from pre-race disclosed goal pace (%off) and mean HR were calculated for nine segments of the bike and 11 segments of the run. Normalized graded running pace (NGP; accounting for changes in elevation) was computed via analysis software. Step-wise regression analyses identified segments predictive of relative success, and HP and LP were compared at these segments to confirm importance. %Off of goal velocity during two downhill segments of the bike (HP: -6.8±3.2%, -14.2±2.6% versus LP: -1.2±4.2%, -5.1±11.5%; p<0.020) and %off from NGP during one downhill segment of the run (HP: 4.8±5.2% versus LP: 33.3±38.7%; p=0.033) significantly predicted relative performance. Also, HP displayed more consistency in mean HR (141±12 to 138±11 bpm) compared to LP (139±17 to 131±16 bpm; p=0.019) over the climb and descent from the turn-around point during the bike component. Athletes who maintained faster relative speeds on downhill segments, and who had smaller changes in HR between consecutive up and downhill segments, were more successful relative to their goal times. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  1. Interrelationship of CB1R and OBR pathways in regulation of metabolic, neuroendocrine, and behavioral responses to food restriction and voluntary wheel running

    PubMed Central

    Diane, Abdoulaye; Vine, Donna F.; Russell, James C.; Heth, C. Donald; Proctor, Spencer D.

    2014-01-01

    We hypothesized the cannabinoid-1 receptor and leptin receptor (ObR) operate synergistically to modulate metabolic, neuroendocrine, and behavioral responses of animals exposed to a survival challenge (food restriction and wheel running). Obese-prone (OP) JCR:LA-cp rats, lacking functional ObR, and lean-prone (LP) JCR:LA-cp rats (intact ObR) were assigned to OP-C and LP-C (control) or CBR1-antagonized (SR141716, 10 mg/kg body wt in food) OP-A and LP-A groups. After 32 days, all rats were exposed to 1.5-h daily meals without the drug and 22.5-h voluntary wheel running, a survival challenge that normally culminates in activity-based anorexia (ABA). Rats were removed from the ABA protocol when body weight reached 75% of entry weight (starvation criterion) or after 14 days (survival criterion). LP-A rats starved faster (6.44 ± 0.24 days) than LP-C animals (8.00 ± 0.29 days); all OP rats survived the ABA challenge. LP-A rats lost weight faster than animals in all other groups (P < 0.001). Consistent with the starvation results, LP-A rats increased the rate of wheel running more rapidly than LP-C rats (P = 0.001), with no difference in hypothalamic and primary neural reward serotonin levels. In contrast, OP-A rats showed suppression of wheel running compared with the OP-C group (days 6–14 of ABA challenge, P < 0.001) and decreased hypothalamic and neural reward serotonin levels (P < 0.01). Thus there is an interrelationship between cannabinoid-1 receptor and ObR pathways in regulation of energy balance and physical activity. Effective clinical measures to prevent and treat a variety of disorders will require understanding of the mechanisms underlying these effects. PMID:24903921

  2. The influence of parachute-resisted sprinting on running mechanics in collegiate track athletes.

    PubMed

    Paulson, Sally; Braun, William A

    2011-06-01

    The aim of this investigation was to compare the acute effects of parachute-resisted (PR) sprinting on selected kinematic variables. Twelve collegiate sprinters (mean age 19.58 ± 1.44 years, mass 69.32 ± 14.38 kg, height 1.71 ± 9.86 m) ran a 40-yd dash under 2 conditions: a PR sprint and a sprint without a parachute (NC), both recorded on a video computer system (60 Hz). Sagittal plane kinematics of the right side of the body were digitized to calculate joint angles at initial ground contact (IGC) and end ground contact (EGC), ground contact (GC) time, stride rate (SR), stride length (SL), and the times of the 40-yd dashes. The NC 40-yd dash time was significantly faster than the PR trial (p < 0.05). The shoulder angle at EGC significantly increased from 34.10 to 42.10° during the PR trial (p < 0.05). There were no significant differences in GC time, SR, SL, or the other joint angles between the 2 trials (p > 0.05). This study suggests that PR sprinting does not acutely affect GC time, SR, SL and upper extremity or lower extremity joint angles during weight acceptance (IGC) in collegiate sprinters. However, PR sprinting increased shoulder flexion by 23.5% at push-off and decreased speed by 4.4%. While sprinting with the parachute, the athletes' movement patterns resembled their mechanics during the unloaded condition. This indicates the external load caused by PR did not substantially overload the runner, and only caused a minor change in the shoulder during push-off. This sports-specific training apparatus may provide coaches with another method for training athletes in a sports-specific manner without causing acute changes to running mechanics.

  3. Modal analysis and dynamic stresses for acoustically excited shuttle insulation tiles

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Ogilvie, P. L.

    1975-01-01

    Improvements and extensions to the RESIST computer program developed for determining the normalized modal stress response of shuttle insulation tiles are described. The new version of RESIST can accommodate primary structure panels with closed-cell stringers, in addition to the capability for treating open-cell stringers. In addition, the present version of RESIST numerically solves vibration problems several times faster than its predecessor. A new digital computer program, titled ARREST (Acoustic Response of Reusable Shuttle Tiles), is also described. Starting with modal information contained on output tapes from RESIST computer runs, ARREST determines RMS stresses, deflections and accelerations of shuttle panels with reusable surface insulation tiles. Both programs are applicable to stringer stiffened structural panels with or without reusable surface insulation tiles.

  4. Pattern Discovery and Change Detection of Online Music Query Streams

    NASA Astrophysics Data System (ADS)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of our proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of bit-sequence representation, taking advantage of the left-shift ("left") and bitwise AND ("and") operations of the representation. Experiments show that the proposed algorithm scans the music query stream only once, and runs significantly faster and consumes less memory than existing algorithms, such as SWFI-stream and Moment.
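
    The bit-sequence idea can be sketched directly: each item keeps one bit per window position, sliding the window is a left shift, and the support of an itemset is the popcount of the AND of its items' sequences. The window size and toy stream below are illustrative; this is not the full FTP-stream algorithm.

        W = 8                                # sliding window of 8 transactions
        MASK = (1 << W) - 1
        bits = {}                            # item -> bit sequence over the window

        def slide_in(transaction):
            for item in list(bits):          # the "left" operation
                bits[item] = (bits[item] << 1) & MASK
            for item in transaction:         # newest transaction occupies bit 0
                bits[item] = bits.get(item, 0) | 1

        def support(itemset):
            acc = MASK
            for item in itemset:             # the "and" operation
                acc &= bits.get(item, 0)
            return bin(acc).count("1")       # co-occurrences inside the window

        stream = [{"c", "g"}, {"c", "e"}, {"c", "g", "e"}, {"g"}, {"c", "g"}]
        for t in stream:
            slide_in(t)
        print(support({"c", "g"}))           # -> 3 (transactions 1, 3 and 5)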

  5. An Optical Model for Estimating the Underwater Light Field from Remote Sensing

    NASA Technical Reports Server (NTRS)

    Liu, Cheng-Chien; Miller, Richard L.

    2002-01-01

    A model of the wavelength-integrated scalar irradiance for a vertically homogeneous water column is developed. It runs twenty thousand times faster than simulations obtained using full Hydrolight code and limits the percentage error to less than 3.7%. Both the distribution of incident sky radiance and a wind-roughened surface are integrated in the model. Our model removes common limitations of earlier models and can be applied to waters with any composition of the inherent optical properties. Implementation of this new model, as well as the ancillary information required for processing global-scale satellite data, is discussed. This new model is fast, accurate, and flexible and therefore provides important information of the underwater light field from remote sensing.

  6. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks.

    PubMed

    Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena

    2014-01-01

    Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicate that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
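
    A minimal sketch of the underlying hit-and-run step, assuming only box-shaped flux bounds: pick a random direction, find how far the walls allow movement either way, and jump to a uniformly random point on that chord. A real metabolic sampler additionally restricts moves to the null space of the stoichiometric matrix, and ACHR draws directions through the running center of previous samples; both refinements are omitted here.

        import numpy as np

        rng = np.random.default_rng(1)
        lb = np.array([0.0, -10.0, 0.0])     # illustrative flux bounds
        ub = np.array([10.0, 10.0, 5.0])

        def hit_and_run(x, n_samples, thin=10):
            samples = []
            for i in range(n_samples * thin):
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)       # uniform random direction
                with np.errstate(divide="ignore"):
                    t_lo = (lb - x) / d      # per-coordinate wall distances
                    t_hi = (ub - x) / d
                t_min = np.max(np.minimum(t_lo, t_hi))
                t_max = np.min(np.maximum(t_lo, t_hi))
                x = x + rng.uniform(t_min, t_max) * d
                if i % thin == 0:
                    samples.append(x.copy())
            return np.array(samples)

        print(hit_and_run((lb + ub) / 2, 5).round(2))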

  7. An Accurate and Efficient Algorithm for Detection of Radio Bursts with an Unknown Dispersion Measure, for Single-dish Telescopes and Interferometers

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-01-01

    Astronomical radio signals are subjected to phase dispersion while traveling through the interstellar medium. To optimally detect a short-duration signal within a frequency band, we have to precisely compensate for the unknown pulse dispersion, which is a computationally demanding task. We present the “fast dispersion measure transform” algorithm for optimal detection of such signals. Our algorithm has a low theoretical complexity of 2 N_f N_t + N_t N_Δ log_2(N_f), where N_f, N_t, and N_Δ are the numbers of frequency bins, time bins, and dispersion measure bins, respectively. Unlike previously suggested fast algorithms, our algorithm conserves the sensitivity of brute-force dedispersion. Our tests indicate that this algorithm, running on a standard desktop computer and implemented in a high-level programming language, is already faster than the state-of-the-art dedispersion codes running on graphical processing units (GPUs). We also present a variant of the algorithm that can be efficiently implemented on GPUs. The latter algorithm’s computation and data-transport requirements are similar to those of a two-dimensional fast Fourier transform, indicating that incoherent dedispersion can now be considered a nonissue while planning future surveys. We further present a fast algorithm for sensitive detection of pulses shorter than the dispersive smearing limits of incoherent dedispersion. In typical cases, this algorithm is orders of magnitude faster than enumerating dispersion measures and coherently dedispersing by convolution. We analyze the computational complexity of pulsed signal searches by radio interferometers. We conclude that, using our suggested algorithms, maximally sensitive blind searches for dispersed pulses are feasible using existing facilities. We provide an implementation of these algorithms in Python and MATLAB.
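
    For contrast, the brute-force incoherent dedispersion baseline, with cost proportional to N_f * N_t * N_Δ, is easy to sketch. The dispersion constant is the standard cold-plasma value; the frequency band, sampling time, and DM grid below are illustrative.

        import numpy as np

        K = 4.148808e3        # dispersion constant, s MHz^2 pc^-1 cm^3

        def dedisperse(data, freqs_mhz, dt_s, dm_trials):
            """data: (Nf, Nt) dynamic spectrum -> (Ndm, Nt) DM-time plane."""
            f_ref = freqs_mhz.max()
            out = np.empty((len(dm_trials), data.shape[1]))
            for k, dm in enumerate(dm_trials):
                delays = K * dm * (freqs_mhz**-2 - f_ref**-2)   # seconds
                shifts = np.round(delays / dt_s).astype(int)    # in samples
                out[k] = sum(np.roll(data[i], -s)               # undo the delay
                             for i, s in enumerate(shifts))
            return out

        freqs = np.linspace(1200.0, 1500.0, 64)   # MHz
        data = np.random.rand(64, 1024)           # noise-only dynamic spectrum
        plane = dedisperse(data, freqs, 1e-3, np.arange(0.0, 100.0, 5.0))
        print(plane.shape)                        # (20, 1024)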

  8. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  9. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.gov; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.gov

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  10. Structator: fast index-based search for RNA sequence-structure patterns

    PubMed Central

    2011-01-01

    Background The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires to match sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results We present a novel method and readily applicable software for time efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This makes it possible to exploit base pairing information for search space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640

  11. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo, et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is to store the data points in a kd-tree data structure. The assignment of points to nearest neighbors is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster, because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.
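
    A simplified sketch of a kd-tree-accelerated k-means iteration follows. Note that, for brevity, it rebuilds a kd-tree over the current centers each iteration for fast nearest-center queries; this is simpler than, and not equivalent to, the filtering algorithm of Kanungo et al. described above, in which a single kd-tree is built over the data points.

        import numpy as np
        from scipy.spatial import cKDTree

        def kmeans(points, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                _, assign = cKDTree(centers).query(points)  # nearest center
                for j in range(k):                          # recompute means
                    members = points[assign == j]
                    if len(members):
                        centers[j] = members.mean(axis=0)
            return centers, assign

        pts = np.random.default_rng(1).random((10000, 4))   # toy "pixels"
        centers, labels = kmeans(pts, k=8)
        print(centers.shape, np.bincount(labels))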

  12. Effects of dietary antioxidants on training and performance in female runners.

    PubMed

    Braakhuis, Andrea J; Hopkins, Will G; Lowe, Tim E

    2014-01-01

    Exercise-induced oxidative stress is implicated in muscle damage and fatigue, which has led athletes to embark on antioxidant supplementation regimes to negate these effects. This study investigated the effects of vitamin C (VC) (1 g), blackcurrant (BC) juice (15 mg VC, 300 mg anthocyanins) and placebo, taken in isocaloric drink form, on training progression, incremental running test and 5-km time-trial performance. Twenty-three trained female runners (age, 31 ± 8 y; mean ± SD) completed three blocks of high-intensity training for 3 wks and 3 days, separated by a washout (~3.7 wks). Changes in training and performance with each treatment were analysed with a mixed linear model, adjusting for performance at the beginning of each training block. Markers of oxidative status, including protein carbonyl, malondialdehyde (in plasma and in vitro erythrocytes), ascorbic acid, uric acid and erythrocyte enzyme activity of superoxide dismutase, catalase and glutathione peroxidase, were analysed. There was a likely harmful effect on mean running speed during training when taking VC (1.3%; 90% confidence limits ±1.3%). Effects of the two treatments relative to placebo on mean performance in the incremental test and time trial were unclear, but runners faster by 1 SD of peak speed demonstrated a possible improvement in peak running speed with BC juice (1.9%; ±2.5%). Following VC, certain oxidative markers were elevated: catalase at rest (23%; ±21%), protein carbonyls at rest (27%; ±38%) and superoxide dismutase post-exercise (8.3%; ±9.3%). In conclusion, athletes should be cautioned about taking VC chronically; however, BC may improve performance in elite runners.

  13. The physiological basis of bird flight

    PubMed Central

    Butler, Patrick J.

    2016-01-01

    Flapping flight is energetically more costly than running, although it is less costly to fly a given body mass a given distance per unit time than it is for a similar mass to run the same distance per unit time. This is mainly because birds can fly faster than they can run. Oxygen transfer and transport are enhanced in migrating birds compared with those in non-migrators: at the gas-exchange regions of the lungs the effective area is greater and the diffusion distance smaller. Also, migrating birds have larger hearts and haemoglobin concentrations in the blood, and capillary density in the flight muscles tends to be higher. Species like bar-headed geese migrate at high altitudes, where the availability of oxygen is reduced and the energy cost of flapping flight increased compared with those at sea level. Physiological adaptations to these conditions include haemoglobin with a higher affinity for oxygen than that in lowland birds, a greater effective ventilation of the gas-exchange surface of the lungs and a greater capillary-to-muscle fibre ratio. Migrating birds use fatty acids as their source of energy, so they have to be transported at a sufficient rate to meet the high demand. Since fatty acids are insoluble in water, birds maintain high concentrations of fatty acid–binding proteins to transport fatty acids across the cell membrane and within the cytoplasm. The concentrations of these proteins, together with that of a key enzyme in the β-oxidation of fatty acids, increase before migration. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528774

  14. Comparison of approaches for mobile document image analysis using server supported smartphones

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource consuming process is the Optical Character Recognition (OCR) process, which is used to extract text in mobile phone captured images. In this study, our goal is to compare the in-phone and the remote server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and other processes run on the mobile phone. Results of the experiments show that the remote server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote server approach overall outperforms the in-phone approach in terms of selected speed and correct recognition metrics, if the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote server approach performs better than the in-phone approach in terms of speed and acceptable correct recognition metrics.

  15. Broadband for Rural America: Economic Impacts and Economic Opportunities. Economic Policy/Briefing Paper

    ERIC Educational Resources Information Center

    Kuttner, Hanns

    2012-01-01

    Historically, waves of new technologies have brought Americans higher standards of living. Electrical service and hot and cold running water, for example, were once luxuries; now their absence makes a home substandard. Today, technologies for accessing the Internet are diffusing at an even faster rate than those earlier innovations once did,…

  16. Physical Performance in Elite Male and Female Team Handball Players.

    PubMed

    Wagner, Herbert; Fuchs, Patrick; Fusco, Andrea; Fuchs, Philip; Bell, W Jeffrey; Duvillard, Serge P

    2018-06-12

    Biological differences between men and women are well known; however, the literature addressing the influence of sex on specific and general performance in team handball is almost nonexistent. Consequently, the aim of the study was to assess and compare specific and general physical performance in male and female elite team handball players, to determine whether the differences are consequential for general compared to specific physical performance characteristics, and to examine the relationship between general and specific physical performance. Twelve male and ten female elite team handball players performed a game-based performance test, upper- and lower-body strength and power tests, a sprinting test, and an incremental treadmill-running test. Significant differences (P<.05) between male and female players were found for peak oxygen uptake and total running time during the treadmill test, 30-m sprinting time, leg extension strength, trunk and shoulder rotation torque, and countermovement jump height, as well as for offense and defense time, ball velocity and jump height in the game-based performance test. An interaction (sex × test) was found for running time and oxygen uptake, and, except for shoulder rotation torque and ball velocity in females, we found only a weak relationship between specific and general physical performance. The results of the study revealed that male players are heavier, taller, faster and stronger, jump higher, and have better aerobic performance. However, female players performed relatively better in the team handball-specific tests compared to the general tests. Our findings also suggest that female players should focus more on strength training.

  17. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    PubMed

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

    A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards runs up to 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just a few minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a resolution that meets the requirements of international dosimetry guidelines.
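
    For orientation, COCG is conjugate gradient with unconjugated inner products, valid for complex symmetric (not Hermitian) matrices, which is the property the paper exploits. A minimal dense-matrix sketch, assuming NumPy; the admittance-network assembly and the multi-GPU layer are not shown:

```python
# COCG for a complex *symmetric* system A x = b (A == A.T, not Hermitian).
# Note r @ r and p @ Ap are unconjugated dot products, unlike ordinary CG.
import numpy as np

def cocg(A, b, tol=1e-8, maxit=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                          # unconjugated: r^T r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)           # p^T A p, again unconjugated
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Self-check on a random complex symmetric system (diagonally shifted
# so the toy example converges easily).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
A = M + M.T + 200 * np.eye(50)
b = rng.standard_normal(50) + 1j * rng.standard_normal(50)
print(np.allclose(A @ cocg(A, b), b, atol=1e-6))   # True
```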

  18. Investigating the Use of the Intel Xeon Phi for Event Reconstruction

    NASA Astrophysics Data System (ADS)

    Sherman, Keegan; Gilfoyle, Gerard

    2014-09-01

    The physics goal of Jefferson Lab is to understand how quarks and gluons form nuclei, and it is being upgraded to a higher, 12-GeV beam energy. The new CLAS12 detector in Hall B will collect 5-10 terabytes of data per day and will require considerable computing resources. We are investigating tools, such as the Intel Xeon Phi, to speed up the event reconstruction. The Kalman Filter is one of the methods being studied. It is a linear algebra algorithm that estimates the state of a system by combining existing data and predictions of those measurements. The tools required to apply this technique (i.e., matrix multiplication, matrix inversion) are being written using C++ intrinsics for Intel's Xeon Phi Coprocessor, which uses the Many Integrated Cores (MIC) architecture. The Intel MIC is a new high-performance chip that connects to a host machine through the PCIe bus and is built to run highly vectorized and parallelized code, making it a well-suited device for applications such as the Kalman Filter. Our tests of the MIC-optimized algorithms needed for the filter show significant increases in speed. For example, matrix multiplication of 5x5 matrices on the MIC was able to run up to 69 times faster than on the host core. Work supported by the University of Richmond and the US Department of Energy.
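
    The Kalman filter step itself is standard linear algebra; the following generic NumPy sketch shows the predict/update cycle and the matrix inversion that dominates the cost. This is an illustration only, not the CLAS12 reconstruction code or its MIC intrinsics:

```python
# One predict/update cycle of a generic linear Kalman filter.
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    # Predict: propagate state and covariance with model F, process noise Q.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with measurement z (model H, noise R).
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain (the costly inversion)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Demo: 1-D constant-velocity track observed through noisy position readings.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]);            R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in (1.1, 2.0, 2.9, 4.2):
    x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
print(x)   # position/velocity estimate near [4, 1]
```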

  19. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.

  20. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
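
    The simplest integration mode described, a leapfrog with a single global time step, looks roughly like the Python toy below. VINE itself is Fortran 95 with trees, SPH, and individual time steps; this sketch assumes unit-mass particles and softened gravity with G = 1:

```python
# Kick-drift-kick leapfrog with one global time step.
import numpy as np

def leapfrog(pos, vel, accel_fn, dt, n_steps):
    """Advance particles; accel_fn(pos) returns per-particle accelerations."""
    acc = accel_fn(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * acc          # half kick
        pos += dt * vel                # drift
        acc = accel_fn(pos)
        vel += 0.5 * dt * acc          # half kick
    return pos, vel

def gravity(pos, eps=1e-3):
    """Direct-sum softened gravity for unit-mass particles (G = 1)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        r2 = (d ** 2).sum(axis=1) + eps ** 2
        r2[i] = np.inf                 # skip self-interaction
        acc[i] = (d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

# Two bodies on a rough orbit.
pos = np.array([[0.5, 0.0], [-0.5, 0.0]])
vel = np.array([[0.0, 0.7], [0.0, -0.7]])
pos, vel = leapfrog(pos, vel, gravity, dt=1e-3, n_steps=1000)
print(pos)
```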

  1. Secured Hash Based Burst Header Authentication Design for Optical Burst Switched Networks

    NASA Astrophysics Data System (ADS)

    Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.

    2017-12-01

    Optical burst switching (OBS) is a promising technology that could meet fast-growing network demand, and it is able to satisfy the bandwidth requirements of bandwidth-intensive applications. OBS proves to be a satisfactory technology for tackling huge bandwidth constraints, but it suffers from security vulnerabilities. The objective of this proposed work is to design a faster and more efficient burst header authentication algorithm for core nodes. There are two important key features in this work, viz., header encryption and authentication. Since the burst header is a critical component of an optical burst switched network, it has to be encrypted; otherwise it is prone to attack. The proposed MD5&RC4-4S-based burst header authentication algorithm runs 20.75 ns faster than the conventional algorithms. The modification suggested in the proposed RC4-4S algorithm gives better security and solves the correlation problems between the publicly known outputs during the key generation phase. The modified MD5 recommended in this work provides a 7.81% better avalanche effect than the conventional algorithm. The device utilization result also shows the suitability of the proposed algorithm for header authentication in real-time applications.
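
    As a rough illustration of the header-protection pattern (hash for authentication, then stream-encrypt header plus digest), here is a Python sketch using plain RC4 and MD5. The paper's RC4-4S and modified MD5 differ from these textbook primitives, so this is not their algorithm, and plain RC4/MD5 should not be used in new designs:

```python
# Generic "hash then stream-encrypt" protection of a burst header.
import hashlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))                       # key-scheduling algorithm
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0              # pseudo-random generation
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def protect_header(header: bytes, key: bytes) -> bytes:
    digest = hashlib.md5(header).digest()      # authentication tag
    plain = header + digest
    ks = rc4_keystream(key, len(plain))
    return bytes(p ^ k for p, k in zip(plain, ks))

print(protect_header(b"burst-header-fields", b"shared-key").hex())
```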

  2. Parallel high-precision orbit propagation using the modified Picard-Chebyshev method

    NASA Astrophysics Data System (ADS)

    Koblick, Darin C.

    2012-03-01

    The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can achieve up to a one-order-of-magnitude speedup over the 12th-order Runge-Kutta-Nystrom method. For this study, the accuracy and computational time of the modified Picard-Chebyshev method, using the Java Astrodynamics Toolkit high-precision force model, are evaluated to assess its runtime performance. Simulation results of the modified Picard-Chebyshev method, implemented in MATLAB and the MATLAB Parallel Computing Toolbox, are compared against the most efficient first- and second-order Ordinary Differential Equation (ODE) solvers. A total of six processors were used to assess the runtime performance of the modified Picard-Chebyshev method. It was found that for all orbit propagation test cases where the gravity model was simulated to be of higher degree and order (above 225, to increase computational overhead), the modified Picard-Chebyshev method was faster, by as much as a factor of two, than the other ODE solvers tested.
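
    The core of the method is the Picard iteration x_{k+1}(t) = x0 + ∫ f(s, x_k(s)) ds with the integrand represented as a Chebyshev series, whose integral is known in closed form. A minimal serial sketch using NumPy's Chebyshev helpers; the published method adds specific modified update formulas and evaluates the nodes in parallel, none of which is shown here:

```python
# Basic Picard iteration with Chebyshev fitting for x' = f(t, x), x(t0) = x0.
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, t0, tf, x0, n_nodes=32, n_iter=30):
    tau = np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1))  # nodes in [-1, 1]
    t = 0.5 * (tf - t0) * (tau + 1.0) + t0
    x = np.full_like(t, float(x0))
    for _ in range(n_iter):
        # Fit a Chebyshev series to f(t, x_k(t)) at the nodes...
        coef = C.chebfit(tau, f(t, x), deg=n_nodes - 1)
        # ...integrate it in tau (chain rule supplies the dt/dtau factor)...
        icoef = C.chebint(coef) * 0.5 * (tf - t0)
        # ...and update x_{k+1}(t) = x0 + integral from t0 to t.
        x = x0 + C.chebval(tau, icoef) - C.chebval(-1.0, icoef)
    return t, x

# Check against x' = -x, x(0) = 1 (exact solution: exp(-t)).
t, x = picard_chebyshev(lambda t, x: -x, 0.0, 2.0, 1.0)
print(np.max(np.abs(x - np.exp(-t))))   # should be tiny (~1e-12)
```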

  3. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

    PubMed

    Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian

    2017-06-01

    State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In the ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
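
    Faster R-CNN implementations are now standard library components. For example, torchvision ships a ResNet-50-FPN variant; the paper's original model was VGG-16 in Caffe, so this is only a convenient modern stand-in, and the image path below is hypothetical:

```python
# Run an off-the-shelf Faster R-CNN detector from torchvision.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# On older torchvision versions, use pretrained=True instead of weights=.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()   # inference mode: the RPN proposes regions, the head classifies

img = to_tensor(Image.open("example.jpg").convert("RGB"))   # hypothetical file
with torch.no_grad():
    (pred,) = model([img])              # list of images in, list of dicts out

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:
        print(label.item(), round(score.item(), 3), box.tolist())
```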

  4. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication, was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates, was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
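
    The underlying idea, refining a pose estimate with the Jacobian of the field model, can be sketched as a Gauss-Newton loop. In this sketch, field_model and the numerical Jacobian are placeholders for the paper's closed-form finite-element interpolation and analytic Jacobian:

```python
# Gauss-Newton pose refinement against a magnetic-field forward model.
import numpy as np

def localize(pose0, measured, field_model, n_iter=10, h=1e-6):
    """pose: parameter vector; field_model(pose) -> stacked sensor readings."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(n_iter):
        r = field_model(pose) - measured           # residual vector
        J = np.empty((r.size, pose.size))          # central-difference Jacobian
        for k in range(pose.size):
            dp = np.zeros_like(pose); dp[k] = h
            J[:, k] = (field_model(pose + dp) - field_model(pose - dp)) / (2 * h)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + step                         # Gauss-Newton update
    return pose

# Toy check: quadratic "field" whose minimizing pose is [1, 2, 3].
truth = np.array([1.0, 2.0, 3.0])
model = lambda p: np.concatenate([p - truth, (p - truth) ** 2])
print(localize(np.zeros(3), model(truth), model))   # converges to truth
```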

  5. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post-space-shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  6. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software packages are essential tools for designing and commissioning laser interferometers. This article aims to introduce OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided of how to run OSCAR on the GPU of modern graphics cards instead of the CPU, making the simulation up to 20 times faster.

  7. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time-consuming; therefore, algorithms for constructing full haplotype patterns from small available data through computational methods, i.e., the tag SNP selection problem, are convenient and attractive. This problem is provably NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find a reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
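
    A skeleton of such a genetic algorithm is sketched below. The fitness used here (coverage of non-tag SNPs at r² ≥ 0.8, penalized by tag count) is a common stand-in, not the paper's own reconstruction-based fitness, and all parameter values are illustrative:

```python
# Genetic algorithm over binary tag-set masks for tag SNP selection.
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, r2, k_penalty=0.01):
    """mask: boolean tag set; r2: precomputed SNP-SNP r^2 matrix."""
    if not mask.any():
        return 0.0
    covered = (r2[:, mask].max(axis=1) >= 0.8).mean()   # tagged at r^2 >= 0.8
    return covered - k_penalty * mask.sum()             # prefer fewer tags

def ga_tag_snps(r2, pop=60, gens=200, p_mut=0.02):
    n = r2.shape[0]
    popn = rng.random((pop, n)) < 0.1                   # sparse initial tag sets
    for _ in range(gens):
        scores = np.array([fitness(ind, r2) for ind in popn])
        elite = popn[np.argsort(scores)[::-1][: pop // 2]]  # truncation selection
        cut = rng.integers(1, n, size=pop // 2)             # one-point crossover
        partners = rng.permutation(pop // 2)
        children = elite.copy()
        for i, j in enumerate(partners):
            children[i, cut[i]:] = elite[j, cut[i]:]
        children ^= rng.random(children.shape) < p_mut      # bit-flip mutation
        popn = np.vstack([elite, children])
    best = max(popn, key=lambda ind: fitness(ind, r2))
    return np.flatnonzero(best)

# Toy demo: r^2 as squared Pearson correlation of random haplotypes.
H = rng.integers(0, 2, size=(60, 40)).astype(float)     # 60 haplotypes x 40 SNPs
r2 = np.corrcoef(H.T) ** 2
print(ga_tag_snps(r2))
```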

  8. GPU Accelerated Chemical Similarity Calculation for Compound Library Comparison

    PubMed Central

    Ma, Chao; Wang, Lirong; Xie, Xiang-Qun

    2012-01-01

    Chemical similarity calculation plays an important role in compound library design, virtual screening, and “lead” optimization. In this manuscript, we present a novel GPU-accelerated algorithm for all-vs-all Tanimoto matrix calculation and nearest neighbor search. By taking advantage of multi-core GPU architecture and CUDA parallel programming technology, the algorithm is up to 39 times faster than existing commercial software that runs on CPUs. Because of the utilization of intrinsic GPU instructions, this approach is nearly 10 times faster than an existing GPU-accelerated sparse-vector algorithm when Unity fingerprints are used for the Tanimoto calculation. The GPU program that implements this new method takes about 20 minutes to complete the calculation of Tanimoto coefficients between 32M PubChem compounds and 10K Active Probes compounds, i.e., 324G Tanimoto coefficients, on a 128-CUDA-core GPU. PMID:21692447
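
    The all-vs-all Tanimoto matrix for binary fingerprints reduces to one matrix product plus per-row popcounts; here is a NumPy stand-in for the GPU kernel, using dense 0/1 arrays rather than the paper's Unity fingerprints:

```python
# Tanimoto(a, b) = |a AND b| / (|a| + |b| - |a AND b|) for all pairs at once.
import numpy as np

def tanimoto_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """A: (n, bits) and B: (m, bits) 0/1 matrices -> (n, m) similarities."""
    A = A.astype(np.float32)
    B = B.astype(np.float32)
    common = A @ B.T                         # |a AND b| for every pair
    pop_a = A.sum(axis=1)[:, None]           # |a|
    pop_b = B.sum(axis=1)[None, :]           # |b|
    return common / (pop_a + pop_b - common)

fp = np.random.default_rng(1).random((1000, 1024)) < 0.1
sims = tanimoto_matrix(fp, fp)
print(sims.shape, sims.diagonal().min())     # diagonal is exactly 1.0
```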

  9. Quantum partial search for uneven distribution of multiple target items

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Korepin, Vladimir

    2018-06-01

    The quantum partial search algorithm is an approximate search. It aims to find a target block (one that contains target items). It runs slightly faster than full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. Efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. Using a perturbation method, we find that the algorithm runs fastest when target items are evenly distributed in the database.
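
    For orientation, plain Grover search is easy to simulate classically on a toy database; partial search interleaves global and per-block Grover iterations to stop earlier, which this baseline sketch omits:

```python
# Classical simulation of Grover iterations on the amplitude vector.
import numpy as np

def grover(n_items, targets, n_iter):
    amp = np.full(n_items, 1.0 / np.sqrt(n_items))   # uniform superposition
    for _ in range(n_iter):
        amp[targets] *= -1.0                         # oracle: flip target phases
        amp = 2.0 * amp.mean() - amp                 # inversion about the mean
    return amp

N, targets = 1024, [3, 97, 511, 700]
opt = int(round(np.pi / 4 * np.sqrt(N / len(targets))))   # ~optimal query count
amp = grover(N, targets, opt)
print(f"{opt} queries -> P(success) = {np.sum(amp[targets] ** 2):.4f}")
```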

  10. Wheel-running reinforcement in free-feeding and food-deprived rats.

    PubMed

    Belke, Terry W; Pierce, W David

    2016-03-01

    Rats experiencing sessions of 30 min of free access to wheel running were assigned to ad-lib and food-deprived groups and given additional sessions of free wheel activity. Subsequently, both ad-lib and deprived rats lever pressed for 60 s of wheel running on fixed ratio (FR) 1, variable ratio (VR) 3, VR 5, and VR 10 schedules, and on a response-initiated variable interval (VI) 30-s schedule. Finally, the ad-lib rats were switched to food deprivation and the food-deprived rats were switched to free food, as rats continued responding on the response-initiated VI 30-s schedule. Wheel running functioned as reinforcement for both ad-lib and food-deprived rats. Food-deprived rats, however, ran faster and had higher overall lever-pressing rates than free-feeding rats. On the VR schedules, wheel-running rates positively correlated with local and overall lever-pressing rates for deprived, but not ad-lib, rats. On the response-initiated VI 30-s schedule, wheel-running rates and lever-pressing rates changed for ad-lib rats switched to food deprivation, but not for food-deprived rats switched to free feeding. The overall pattern of results suggested different sources of control for wheel running: intrinsic motivation, contingencies of automatic reinforcement, and food-restricted wheel running. An implication is that generalizations about operant responding for wheel running in food-deprived rats may not extend to wheel running and operant responding of free-feeding animals. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. The influence of the magnetic field on running penumbral waves in the solar chromosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jess, D. B.; Reznikova, V. E.; Van Doorsselaere, T.

    2013-12-20

    We use images of high spatial and temporal resolution, obtained using both ground- and space-based instrumentation, to investigate the role magnetic field inclination angles play in the propagation characteristics of running penumbral waves in the solar chromosphere. Analysis of a near-circular sunspot, close to the center of the solar disk, reveals a smooth rise in oscillatory period as a function of distance from the umbral barycenter. However, in one directional quadrant, corresponding to the north direction, a pronounced kink in the period-distance diagram is found. Utilizing a combination of the inversion of magnetic Stokes vectors and force-free field extrapolations, we attribute this behavior to the cut-off frequency imposed by the magnetic field geometry in this location. A rapid, localized inclination of the magnetic field lines in the north direction results in a faster increase in the dominant periodicity due to an accelerated reduction in the cut-off frequency. For the first time, we reveal how the spatial distribution of dominant wave periods, obtained with one of the highest resolution solar instruments currently available, directly reflects the magnetic geometry of the underlying sunspot, thus opening up a wealth of possibilities in future magnetohydrodynamic seismology studies. In addition, the intrinsic relationships we find between the underlying magnetic field geometries connecting the photosphere to the chromosphere, and the characteristics of running penumbral waves observed in the upper chromosphere, directly supports the interpretation that running penumbral wave phenomena are the chromospheric signature of upwardly propagating magneto-acoustic waves generated in the photosphere.

  12. A comparison of anthropometric and training characteristics between recreational female marathoners and recreational female Ironman triathletes.

    PubMed

    Rüst, Christoph Alexander; Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas

    2013-02-28

    A personal best marathon time has been reported as a strong predictor variable for Ironman race time in recreational female Ironman triathletes. This raises the question of whether recreational female Ironman triathletes are similar to recreational female marathoners. We investigated similarities and differences in anthropometry and training between 53 recreational female Ironman triathletes and 46 recreational female marathoners. The association of anthropometric variables and training characteristics with race time was investigated using bi- and multi-variate analysis. The Ironman triathletes were younger (P < 0.01) and had a lower skin-fold thickness at the pectoral (P < 0.001), axillary (P < 0.01), and subscapular (P < 0.05) sites, but a greater skin-fold thickness at the calf site (P < 0.01), compared to the marathoners. Overall weekly training hours were higher in the Ironman triathletes (P < 0.001). The triathletes ran faster during training than the marathoners (P < 0.05). For the triathletes, neither an anthropometric nor a training variable showed an association with overall Ironman race time after bi-variate analysis. In the multi-variate analysis, running speed during training was related to the marathon split time for the Ironman triathletes (P = 0.01) and to marathon race time for the marathoners (P = 0.01). To conclude, although personal best marathon time is a strong predictor variable for performance in recreational female Ironman triathletes, there are differences in both anthropometry and training between recreational female Ironman triathletes and recreational female marathoners, and different predictor variables for race performance in these two groups of athletes. These findings suggest that recreational female Ironman triathletes are not comparable to recreational female marathoners regarding the association of anthropometric and training characteristics with race time.

  13. Aggregated Indexing of Biomedical Time Series Data

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.

    2016-01-01

    Remote and wearable medical sensing has the potential to create very large and high-dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high-dimensional indexing methods are a two-stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. The algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both the memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes. PMID:27617298
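
    The aggregation idea, hash incoming segments with locality-sensitive hashing and keep one running centroid per bucket, can be sketched as follows. This uses random-projection LSH with a single hash table; the paper's multi-table and radius-check details are omitted, and all class and parameter names are illustrative:

```python
# Online aggregation of fixed-length time-series segments via sign-LSH.
import numpy as np

class AggregatedIndex:
    def __init__(self, seg_len, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, seg_len))
        self.buckets = {}                       # hash key -> (sum, count)

    def _key(self, seg):
        return tuple((self.planes @ seg) > 0)   # sign pattern = bucket id

    def add(self, seg):
        k = self._key(seg)
        s, c = self.buckets.get(k, (np.zeros_like(seg), 0))
        self.buckets[k] = (s + seg, c + 1)      # update running centroid

    def query(self, seg):
        s, c = self.buckets.get(self._key(seg), (None, 0))
        return None if c == 0 else s / c        # bucket centroid, if any

idx = AggregatedIndex(seg_len=64)
for _ in range(1000):
    idx.add(np.sin(np.linspace(0, 6.28, 64)) + 0.05 * np.random.randn(64))
print(len(idx.buckets), "buckets for 1000 similar segments")
```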

  14. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    Solving fractional differential equations is very time-consuming. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) solved with an iterative implicit finite difference method is O(M_x·M_y·N²). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm's results agree well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing will become a basic method for computationally intensive fractional applications in the near future.

  15. Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras.

    PubMed

    Malek, Salim; Melgani, Farid; Mekhalfi, Mohamed Lamine; Bazi, Yakoub

    2017-11-16

    This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images (grabbed by the user by means of a chest-mounted camera) and provide as output a list of objects that likely exist in the user's context across the indoor scene. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments point out that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, compared with reference works, our method (i) yields higher classification accuracy and (ii) runs at least four times faster, which enables a potential full real-time application.

  16. Status and future plans for open source QuickPIC

    NASA Astrophysics Data System (ADS)

    An, Weiming; Decyk, Viktor; Mori, Warren

    2017-10-01

    QuickPIC is a three-dimensional (3D) quasi-static particle-in-cell (PIC) code developed on top of the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wakefield can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating PBAs. It uses an MPI/OpenMP hybrid parallel algorithm and can run on anything from a laptop to the largest supercomputers. The open-source QuickPIC is an object-oriented program with high-level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git

  17. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    PubMed

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique, which has been regularly used for studying the brain's functional activities in the past few years. A very well-used measure for capturing functional associations in the brain is Pearson's correlation coefficient. Pearson's correlation is widely used for constructing functional networks and studying the dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivities among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time-consuming, especially when a large number of subjects are being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Based on the symmetric property of Pearson's correlation, this approach returns the N(N-1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing correlations in a one-dimensional array with the order proposed in this paper is useful for further usage. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as the sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
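
    The computation being accelerated has a compact CPU reference: z-score each voxel's time series, take one matrix product, and keep the strictly upper triangle as a flat array. The row-major ordering below is an assumption for illustration, not necessarily the paper's exact layout:

```python
# Pairwise Pearson correlations via normalized rows and one matrix product.
import numpy as np

def pairwise_pcc_upper(X: np.ndarray) -> np.ndarray:
    """X: (n_voxels, n_timepoints) -> flat array of N(N-1)/2 correlations."""
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)    # unit-norm rows
    corr = Z @ Z.T                                   # all pairwise correlations
    iu = np.triu_indices(len(X), k=1)                # strictly upper triangle
    return corr[iu]

X = np.random.default_rng(7).standard_normal((500, 200))
print(pairwise_pcc_upper(X).shape)                   # (124750,) = 500*499/2
```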

  18. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi; Suprijadi

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single core of the CPU. Another result shows that optimum image quality was obtained with photon histories starting from 10⁸ and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  19. Running Performance With Nutritive and Nonnutritive Sweetened Mouth Rinses.

    PubMed

    Hawkins, Keely R; Krishnan, Sridevi; Ringos, Lara; Garcia, Vanessa; Cooper, Jamie A

    2017-09-01

    Using a mouth rinse (MR) with carbohydrate during exercise has been shown to act as an ergogenic aid. The purpose was to investigate whether nutritive or nonnutritive sweetened MR affects exercise performance and to assess the influence of sweetness intensity on endurance performance during a time trial (TT). This randomized, single-blinded study had 4 treatment conditions. Sixteen subjects (9 men, 7 women) completed a 12.8-km TT 4 different times. During each TT, subjects mouth-rinsed and expectorated a different solution at time 0 and every 12.5% of the TT. The 4 MR solutions were sucrose (S) (sweet taste and provides energy of 4 kcal/g), a lower-intensity sucralose (S1:1) (artificial sweetener that provides no energy but tastes sweet), a higher-intensity sucralose (S100:1), and water as control (C). Completion times for each TT, heart rate (HR), and ratings of perceived exertion (RPE) were also recorded. Completion time for S was faster than for C (1:03:47 ± 00:02:17 vs 1:06:56 ± 00:02:18, respectively; P < .001) and showed a trend to be faster vs S100:1 (1:03:47 ± 00:02:17 vs 1:05:38 ± 00:02:12, respectively; P = .07). No other TT differences were found. Average HR showed a trend to be higher for S vs C (P = .08). The only difference in average or maximum RPE was a higher maximum RPE in C vs S1:1 (P = .02). A sweet-tasting MR did improve endurance performance compared with water in a significant manner (mean 4.5% improvement; 3+ min); however, the presence of energy in the sweet MR appeared necessary since the artificial sweeteners did not improve performance more than water alone.

  20. Running Performance with Nutritive and Non-Nutritive Sweetened Mouth Rinses

    PubMed Central

    Hawkins, Keely H.; Krishnan, Sridevi; Ringos, Lara; Garcia, Vanessa; Cooper, Jamie A.

    2017-01-01

    Mouth rinsing (MR) with carbohydrate during exercise has been shown to act as an ergogenic aid. Purpose To investigate whether nutritive or nonnutritive sweetened MR affects exercise performance, and to assess the influence of sweetness intensity on endurance performance during a time-trial (TT). Methods This randomized, single-blinded study had 4 treatment conditions. 16 subjects (9 men, 7 women) completed a 12.8 km TT four different times. During each TT, subjects mouth rinsed and expectorated a different solution at time 0 and every 12.5% of the TT. The 4 MR solutions were: sucrose (S) (sweet taste and provides energy of 4 kcals/g), a lower intensity sucralose (S1:1) (artificial sweetener that provides no energy but tastes sweet), a higher intensity sucralose (S100:1), and water as control (C). Completion times for each TT, heart rate (HR) and ratings of perceived exertion (RPE) were also recorded. Results Completion time for S was faster than C (1:03:47±00:02:17 vs. 1:06:56±00:02:18; p<0.001, respectively), and showed a trend to be faster vs. S100:1 (1:03:47±00:02:17 vs. 1:05:38±00:02:12; p=0.07, respectively). No other TT differences were found. Average HR showed a trend to be higher for S vs. C (p=0.08). The only difference in average or max RPE was a higher max RPE in C vs. S1:1 (p=0.02). Conclusion A sweet-tasting MR did improve endurance performance compared to water in a significant manner (avg. 4.5% improvement; 3+ min); however, the presence of energy in the sweet MR appeared necessary, since the artificial sweeteners did not improve performance more than water alone. PMID:28095077

  1. Performance and sex differences in 'Isklar Norseman Xtreme Triathlon'.

    PubMed

    Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Stiefel, Michael; Rosemann, Thomas; Rüst, Christoph Alexander

    2016-10-31

    The performance and sex differences of long-distance triathletes competing in 'Ironman Hawaii' are well investigated. However, less information is available with regard to triathlon races of the Ironman distance held under extreme environmental conditions (e.g. extreme cold) such as the 'Isklar Norseman Xtreme Triathlon', which started in 2003. In 'Isklar Norseman Xtreme Triathlon', athletes swim at a water temperature of ~13-15°C, cycle at temperatures of ~5-20°C and run at temperatures of ~12-28°C in the valley and of ~2-12°C at Mt. Gaustatoppen. This study analysed the performance trends and sex differences in 'Isklar Norseman Xtreme Triathlon' held from 2003 to 2015 using mixed-effects regression analyses. During this period, a total of 175 women (10.6%) and 1,852 men (89.4%) successfully finished the race. The number of female (r² = 0.53, P = 0.0049) and male (r² = 0.37, P = 0.0271) finishers increased and the men-to-women ratio decreased (r² = 0.86, P < 0.0001). Men were faster than women in cycling (25.41 ± 2.84 km/h versus 24.25 ± 2.17 km/h) (P < 0.001), but not in swimming (3.06 ± 0.62 km/h vs. 2.94 ± 0.57 km/h), running (7.43 ± 1.13 km/h vs. 7.31 ± 0.93 km/h) and overall race time (874.57 ± 100.62 min vs. 899.95 ± 90.90 min) (P > 0.05). Across years, women improved in swimming and both women and men improved in cycling and in overall race time (P < 0.001). In running, however, neither women nor men improved (P > 0.05). In summary, in 'Isklar Norseman Xtreme Triathlon' from 2003 to 2015, the number of successful women increased across years, women achieved a performance similar to men in swimming, running and overall race time, and women improved across years in swimming, cycling and overall race time.

  2. POTS to broadband ... cable modems.

    PubMed

    Kabachinski, Jeff

    2003-01-01

    There have been 3 columns talking about broadband communications, and now at the very end, when it's time to compare using a telco or cableco, I'm asking: does it really matter? So what if I can actually get the whole 30 Mbps with a cable network when the website I'm connecting to is running on an ISDN line at 128 Kbps? Broadband offers a lot more bandwidth than the connections many Internet servers have today. Except for the biggest websites, many servers connect to the Internet with a switched 56-Kbps, ISDN, or fractional T1 line. Even with the big websites, my home network only runs a 10 Mbps Ethernet connection to my cable modem. Maybe it doesn't matter that the cable lines are shared or that I can only get 8 Mbps from an ADSL line. Maybe the ISP that I use has a T1 line connection to the Internet, so my new ADSL modem has a fatter pipe than my provider! (See table 1). It all makes me wonder what's in store for us in the future. PC technology has increased exponentially in the last 10 years, with super-fast processor speeds, hard disks of hundreds of gigabytes, and amazing video and audio. Internet connection speeds have failed to keep the same pace. Instead of being hundreds of times better or faster, modem speeds are barely 10 times faster. Broadband connections offer some additional speed, but still not comparable growth, as broadband connections are still in their infancy. Rather than trying to make use of existing communication paths, maybe we need a massive infrastructure makeover of something new. How about national wireless access points so we can connect anywhere, anytime? To use the latest and fastest wireless technology you will simply need to buy another $9.95 WLAN card or download the latest super-slick WLAN compression/encryption software. Perhaps it is time for a massive infra-restructuring. Consider the past massive infrastructure efforts. The telcos needed to put in their wiring infrastructure starting in the 1870s before telephones were useful to the masses. CATV was a minor player in the TV broadcast business before they installed their cabling infrastructure and went national. Even automobiles were fairly useless until roads were paved and the highway infrastructure was built!

  3. Relaxation processes in a low-order three-dimensional magnetohydrodynamics model

    NASA Technical Reports Server (NTRS)

    Stribling, Troy; Matthaeus, William H.

    1991-01-01

    The time-asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants, including energy, cross helicity, and magnetic helicity. It is concluded that the time-asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time-asymptotic states, and to accurately describe the long-time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values of O(1). The faster of the two processes takes states initially dominated by magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power-law growth of kinetic energy. The other process takes states initially dominated by kinetic energy to the near-equipartitioned state through exponential growth of magnetic energy.

  4. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    PubMed Central

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n⁶). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  5. Speed versus accuracy in decision-making ants: expediting politics and policy implementation.

    PubMed

    Franks, Nigel R; Dechaume-Moncharmont, François-Xavier; Hanmore, Emma; Reynolds, Jocelyn K

    2009-03-27

    Compromises between speed and accuracy are seemingly inevitable in decision-making when accuracy depends on time-consuming information gathering. In collective decision-making, such compromises are especially likely because information is shared to determine corporate policy. This political process will also take time. Speed-accuracy trade-offs occur among house-hunting rock ants, Temnothorax albipennis. A key aspect of their decision-making is quorum sensing in a potential new nest. Finding a sufficient number of nest-mates, i.e. a quorum threshold (QT), in a potential nest site indicates that many ants find it suitable. Quorum sensing collates information. However, the QT is also used as a switch, from recruitment of nest-mates to their new home by slow tandem running, to recruitment by carrying, which is three times faster. Although tandem running is slow, it effectively enables one successful ant to lead and teach another the route between the nests. Tandem running creates positive feedback; more and more ants are shown the way, as tandem followers become, in turn, tandem leaders. The resulting corps of trained ants can then quickly carry their nest-mates; but carried ants do not learn the route. Therefore, the QT seems to set both the amount of information gathered and the speed of the emigration. Low QTs might cause more errors and a slower emigration, the worst possible outcome. This possible paradox of quick decisions leading to slow implementation might be resolved if the ants could deploy another positive-feedback recruitment process when they have used a low QT. Reverse tandem runs occur after carrying has begun and lead ants back from the new nest to the old one. Here we show experimentally that reverse tandem runs can bring lost scouts into an active role in emigrations and can help to maintain high-speed emigrations. Thus, in rock ants, although quick decision-making and rapid implementation of choices are initially in opposition, a third recruitment method can restore rapid implementation after a snap decision. This work reveals a principle of widespread importance: the dynamics of collective decision-making (i.e. the politics) and the dynamics of policy implementation are sometimes intertwined, and only by analysing the mechanisms of both can we understand certain forms of adaptive organization.

  6. NGScloud: RNA-seq analysis of non-model species using cloud computing.

    PubMed

    Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai

    2018-05-03

    RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using Amazon's cloud computing services, which permit access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that costs and run times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control a workflow of RNA-seq analysis oriented to non-model species. It incorporates the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. unai.lopezdeheredia@upm.es.

  7. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of the very-large-scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) resist profiles and lessening standing-wave effects. However, full 3-D chemically amplified resist simulation is not widely adopted during postlayout optimization due to its long run time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing-wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features, based on the Sylvester equation and the Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
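
    Sylvester equations of the form AX + XB = Q can be solved directly with SciPy. The toy call below only shows the equation's form and the library routine; it is not the paper's resist model, and all matrices are random placeholders:

```python
# Solving A @ X + X @ B = Q with SciPy's Bartels-Stewart-based routine.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 40))
B = rng.standard_normal((60, 60))
Q = rng.standard_normal((40, 60))

X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))   # True
```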

  8. FastDart: a fast, accurate and friendly version of DART code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.; Taboada, H.

    2000-11-08

    A new, enhanced, visual version of the DART code is presented. DART is a mechanistic-model-based code developed for the performance calculation and assessment of aluminum dispersion fuel. The major features of this new version are a new, time-saving calculation routine able to run on a PC, a friendly visual input interface, and a plotting facility. This version, available for silicide and U-Mo fuels, adds faster execution and visual interfaces to the classical accuracy of DART models for fuel performance prediction. It is part of a collaboration agreement between ANL and CNEA in the area of Low Enriched Uranium Advanced Fuels, held under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy.

  9. Multi-core and GPU accelerated simulation of a radial star target imaged with equivalent t-number circular and Gaussian pupils

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2013-09-01

    Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
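
    For orientation, the FFT-based incoherent imaging step that dominates such simulations can be sketched in a few lines of numpy; this is a simplified scalar, monochromatic version with arbitrary grid and pupil sizes, not GelOE's implementation.

        import numpy as np

        N = 512
        y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

        # Circular pupil; a Gaussian apodization could be swapped in here.
        pupil = (x**2 + y**2 <= (N // 8) ** 2).astype(float)

        # Incoherent point-spread function = |FT(pupil)|^2, then the OTF.
        psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
        psf /= psf.sum()
        otf = np.fft.fft2(np.fft.ifftshift(psf))

        # Radial "star" target: alternating angular sectors.
        theta = np.arctan2(y, x)
        target = (np.sin(24 * theta) > 0).astype(float)

        # One through-focus step: filter the target spectrum with the OTF.
        image = np.real(np.fft.ifft2(np.fft.fft2(target) * otf))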

  10. Contributions of metabolic and temporal costs to human gait selection.

    PubMed

    Summerside, Erik M; Kram, Rodger; Ahmed, Alaa A

    2018-06-01

    Humans naturally select several parameters within a gait that correspond with minimizing metabolic cost. Much less is understood about the role of metabolic cost in selecting between gaits. Here, we asked participants to decide between walking or running out and back to different gait-specific markers. The distance of the walking marker was adjusted after each decision to identify relative distances where individuals switched gait preferences. We found that neither minimizing solely metabolic energy nor minimizing solely movement time could predict how the group decided between gaits. Of our twenty participants, six behaved in a way that tended towards minimizing metabolic energy, while eight favoured strategies that tended more towards minimizing movement time. The remaining six participants could not be explained by minimizing a single cost. We provide evidence that humans consider not just a single movement cost, but instead a weighted combination of these conflicting costs, with their relative contributions varying across participants. Individuals who placed a higher relative value on time ran faster than individuals who placed a higher relative value on metabolic energy. Sensitivity to temporal costs also explained variability in an individual's preferred velocity as a function of increasing running distance. Interestingly, these differences in velocity both within and across participants were absent in walking, possibly due to a steeper metabolic cost of transport curve. We conclude that metabolic cost plays an essential, but not exclusive role in gait decisions. © 2018 The Author(s).
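
    A minimal sketch of the kind of weighted cost model described above, deciding between gaits by minimizing w * energy + (1 - w) * time; the per-gait costs, speeds and the weight w are invented for illustration, not the paper's fitted values.

        # Hypothetical per-meter energy costs (J/m) and speeds (m/s).
        GAITS = {
            "walk": {"cost": 2.5, "speed": 1.4},
            "run": {"cost": 4.0, "speed": 3.0},
        }

        def weighted_cost(gait, distance, w):
            energy = GAITS[gait]["cost"] * distance
            time = distance / GAITS[gait]["speed"]
            return w * energy + (1.0 - w) * time

        def decide(distance_walk, distance_run, w):
            walk = weighted_cost("walk", distance_walk, w)
            run = weighted_cost("run", distance_run, w)
            return "walk" if walk <= run else "run"

        # A participant weighting time heavily (low w) chooses to run
        # even when the running marker is farther away.
        print(decide(distance_walk=80.0, distance_run=100.0, w=0.1))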

  11. Methods for semi-automated indexing for high precision information retrieval.

    PubMed

    Berrios, Daniel C; Cucina, Russell J; Fagan, Lawrence M

    2002-01-01

    To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. Compared with manual methods, ISAID greatly decreased indexing times. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.

  12. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    PubMed

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
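
    A toy sketch of the pseudo-greedy, sequential (terminus-to-terminus) search idea: grow the chain one fragment at a time, keeping the best-scoring extension rather than resampling whole conformations. The fragment set and scoring function are invented placeholders; SAINT2's real move set and energy function are far richer.

        import random

        FRAGMENTS = ["HH", "HE", "EE", "EC", "CC"]   # toy fragment "library"

        def score(chain):
            # Placeholder energy: reward helical content.
            return chain.count("H")

        def sequential_predict(length, candidates_per_step=10):
            chain = ""
            while len(chain) < length:
                # Sample several extensions, keep the best (pseudo-greedy).
                trials = [chain + random.choice(FRAGMENTS)
                          for _ in range(candidates_per_step)]
                chain = max(trials, key=score)
            return chain[:length]

        print(sequential_predict(30))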

  13. Software framework for the upcoming MMT Observatory primary mirror re-aluminization

    NASA Astrophysics Data System (ADS)

    Gibson, J. Duane; Clark, Dusty; Porter, Dallan

    2014-07-01

    Details of the software framework for the upcoming in-situ re-aluminization of the 6.5m MMT Observatory (MMTO) primary mirror are presented. This framework includes: 1) a centralized key-value store and data structure server for data exchange between software modules, 2) a newly developed hardware-software interface for faster data sampling and better hardware control, 3) automated control algorithms that are based upon empirical testing, modeling, and simulation of the aluminization process, 4) re-engineered graphical user interfaces (GUIs) that use state-of-the-art web technologies, and 5) redundant relational databases for data logging. Redesign of the software framework has several objectives: 1) automated process control to provide more consistent and uniform mirror coatings, 2) optional manual control of the aluminization process, 3) modular design to allow flexibility in process control and software implementation, 4) faster data sampling and logging rates to better characterize the approximately 100-second aluminization event, and 5) synchronized "real-time" web application GUIs to provide all users with exactly the same data. The framework has been implemented as four modules interconnected by a data store/server. The four modules are integrated into two Linux system services that start automatically at boot-time and remain running at all times. Performance of the software framework is assessed through extensive testing within 2.0-meter and smaller coating chambers at the Sunnyside Test Facility. The redesigned software framework helps ensure that a better-performing and longer-lasting coating will be achieved during the re-aluminization of the MMTO primary mirror.
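
    The abstract does not name the data-exchange component, but "key-value store and data structure server" describes servers such as Redis; a minimal sketch of the publish/read pattern the modules might use (key and channel names are hypothetical).

        import redis  # assumes a Redis-compatible server on localhost

        r = redis.Redis(host="localhost", port=6379)

        # A data-acquisition module stores a sample and announces it...
        r.set("chamber:pressure", 3.2e-6)
        r.publish("telemetry", "chamber:pressure updated")

        # ...and a control or GUI module reads the latest value.
        pressure = float(r.get("chamber:pressure"))
        print(f"chamber pressure: {pressure:.2e}")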

  14. Foot strike patterns of runners at the 15-km point during an elite-level half marathon.

    PubMed

    Hasegawa, Hiroshi; Yamauchi, Takeshi; Kraemer, William J

    2007-08-01

    Many coaches make various recommendations regarding foot landing techniques in distance running that are meant to improve running performance and prevent injuries. Several studies have investigated the kinematic and kinetic differences between rearfoot strike (RFS), midfoot strike (MFS), and forefoot strike (FFS) patterns at foot landing and their effects on running efficiency under treadmill and overground conditions. However, little is known about actual foot strike patterns during a road race at the elite level of competition. The purpose of the present study was to document foot strike patterns during a half marathon in which elite international-level runners, including Olympians, competed. Four hundred fifteen runners were filmed with two 120-Hz video cameras placed at a height of 0.15 m at the 15.0-km point, and sagittal images of foot landing and take-off were obtained for 283 runners. RFS was observed in 74.9% of all analyzed runners, MFS in 23.7%, and FFS in 1.4%. The percentage of MFS was higher in the faster groups when all runners were ranked at the 15.0-km point and divided into groups of 50. In the top 50, which included up to the 69th-place runner in actual order, who passed the 15-km point at 45 minutes, 53 seconds (a speed of 5.45 m x s(-1), or 15 minutes, 17 seconds per 5 km), RFS, MFS, and FFS were 62.0, 36.0, and 2.0%, respectively. Contact time (CT) clearly increased for the slower runners, i.e., as placement order increased (r = 0.71, p < or = 0.05). The CT for RFS + FFS in each group of 50 increased significantly with placement order. The CT for RFS was significantly longer than that for MFS + FFS (200.0 +/- 21.3 vs. 183.0 +/- 16 milliseconds). Apparent inversion (INV) of the foot at foot strike was observed in 42% of all runners. The percentage of INV for MFS was higher than for RFS and FFS (62.5, 32.0, and 50%, respectively). The CT with INV for MFS + FFS was significantly shorter than the CT with and without INV for RFS. Furthermore, the CT with INV was significantly shorter than the push-off time without INV for RFS. The findings of this study indicate that foot strike patterns are related to running speed: the percentage of RFS increases as running speed decreases, and conversely, the percentage of MFS increases as running speed increases. A shorter contact time and a higher frequency of inversion at foot contact might contribute to higher running economy.

  15. An analysis of travel costs on transport of load and nest building in golden hamster.

    PubMed

    Guerra, Rogerio F.; Ades, Cesar

    2002-03-28

    We investigated the effects of travel costs on the transport of nest material and nest-building activity in golden hamsters. Nest-deprived animals were required to run down alleys 30, 90 or 180 cm long to access a source containing paper strips as nest material (Experiment 1) or were exposed to the same travel costs in 24-h experimental sessions (Experiment 2). We noted that increased travel costs were related to a decreased number of trips to the source, larger amounts (cm(2)) of nest material transported per trip (although total loads also decreased in longer alleys), longer intervals between trips, and increased time spent at the source and in nest-building activity. Foraging efficiency (i.e. size of load divided by the time spent at the source) decreased as a function of travel costs, and animals transported their loads in two fundamental ways: in 30-cm alleys, they simply used their mouths to pull the paper strips, but in 90- or 180-cm alleys they transported the loads in their cheek pouches. The animals were faster when returning to the home cage, and their running speed (cm/s) increased as a function of alley length, showing that animals are under different environmental pressures when searching for resources than when subsequently running back to the nest with the load. Both male and female subjects were sensitive to travel costs, but males engaged in nest-building activity more promptly and exhibited higher mean performance on most measures. We conclude that nest material is a good reinforcer, and our major results are in accordance with the predictions of microeconomic and optimal foraging theories.

  16. Comparison of self-administration behavior and responsiveness to drug-paired cues in rats running an alley for intravenous heroin and cocaine.

    PubMed

    Su, Zu-In; Wenzel, Jennifer; Baird, Rebeccah; Ettenberg, Aaron

    2011-04-01

    Evidence suggests that responsiveness to a drug-paired cue is predicted by the reinforcing magnitude of the drug during prior self-administration. It remains unclear, however, if this principle holds true when comparisons are made across drug reinforcers. The current study was therefore devised to test the hypothesis that differences in the animals' responsiveness to a cocaine- or heroin-paired cue presented during extinction would reflect differences in the patterns of prior cocaine and heroin runway self-administration. Rats ran a straight alley for single intravenous injections of either heroin (0.1 mg/kg/inj) or cocaine (1.0 mg/kg/inj) each paired with a distinct olfactory cue. Animals experienced 15 trials with each drug reinforcer in a counterbalanced manner. Start latencies, run times, and retreat behaviors (a form of approach-avoidance conflict) provided behavioral indices of the subjects' motivation to seek the reinforcer on each trial. Responsiveness to each drug-paired cue was assessed after 7, 14, or 21 days of non-reinforced extinction trials. Other animals underwent conditioned place preference (CPP) testing to ensure that the two drug reinforcers were capable of producing drug-cue associations. While both drugs produced comparable CPPs, heroin served as a stronger incentive stimulus in the runway as evidenced by faster start and run times and fewer retreats. In contrast, cocaine- but not heroin-paired cues produced increases in drug-seeking behavior during subsequent extinction trials. The subjects' responsiveness to drug-paired cues during extinction was not predicted by differences in the motivation to seek heroin versus cocaine during prior drug self-administration.

  17. Increase in Leg Stiffness Reduces Joint Work During Backpack Carriage Running at Slow Velocities.

    PubMed

    Liew, Bernard; Netto, Kevin; Morris, Susan

    2017-10-01

    Optimal tuning of leg stiffness has been associated with better running economy. Running with a load is energetically expensive, which could have a significant impact on athletic performance where backpack carriage is involved. The purpose of this study was to investigate the impact of load magnitude and velocity on leg stiffness. We also explored the relationship between leg stiffness and running joint work. Thirty-one healthy participants ran overground at 3 velocities (3.0, 4.0 and 5.0 m·s(-1)), whilst carrying 3 load magnitudes (0%, 10% and 20% of body weight). Leg stiffness was derived using the direct kinetic-kinematic method. Joint work data were previously reported in a separate study. Linear models were used to establish relationships between leg stiffness and load magnitude, velocity, and joint work. Our results showed that leg stiffness did not increase with load magnitude. Increased leg stiffness was associated with reduced total joint work at 3.0 m·s(-1), but not at faster velocities. The association between leg stiffness and joint work at slower velocities could be due to an optimal covariation between skeletal and muscular components of leg stiffness, and limb attack angle. When running at a relatively comfortable velocity, greater leg stiffness may reflect a more energy-efficient running pattern.

  18. New insights into faster computation of uncertainties

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-11-01

    Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
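
    A toy sketch contrasting the two workloads on a scalar model output: a regression (linearized-sensitivity) interval fitted from a few dozen runs versus brute-force sampling with many runs. The "model" and run counts are invented; real groundwater simulations are far costlier per run.

        import numpy as np
        from scipy import stats

        def model(k):
            # Stand-in for an expensive simulation (e.g., groundwater head).
            return 3.0 * k + 0.5 + np.random.normal(0.0, 0.2)

        # Regression-based approach: ~30 runs over the parameter range.
        k_runs = np.linspace(0.5, 1.5, 30)
        h_runs = np.array([model(k) for k in k_runs])
        slope, intercept, r, p, se = stats.linregress(k_runs, h_runs)
        print(f"fitted sensitivity dh/dk = {slope:.2f} +/- {1.96 * se:.2f}")

        # Brute-force sampling approach: tens of thousands of runs.
        samples = [model(np.random.normal(1.0, 0.1)) for _ in range(100_000)]
        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"sampled 95% interval of h: [{lo:.2f}, {hi:.2f}]")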

  19. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  20. Application of Multiplexed Replica Exchange Molecular Dynamics to the UNRES Force Field: Tests with alpha and alpha+beta Proteins.

    PubMed

    Czaplewski, Cezary; Kalinowski, Sebastian; Liwo, Adam; Scheraga, Harold A

    2009-03-10

    The replica exchange (RE) method is increasingly used to improve sampling in molecular dynamics (MD) simulations of biomolecular systems. Recently, we implemented the united-residue UNRES force field for mesoscopic MD. Initial results from UNRES MD simulations show that we are able to simulate folding events that take place in a microsecond or even a millisecond time scale. To speed up the search further, we applied the multiplexing replica exchange molecular dynamics (MREMD) method. The multiplexed variant (MREMD) of the RE method, developed by Rhee and Pande, differs from the original RE method in that several trajectories are run at a given temperature. Each set of trajectories run at a different temperature constitutes a layer. Exchanges are attempted not only within a single layer but also between layers. The code has been parallelized and scales up to 4000 processors. We present a comparison of canonical MD, REMD, and MREMD simulations of protein folding with the UNRES force-field. We demonstrate that the multiplexed procedure increases the power of replica exchange MD considerably and convergence of the thermodynamic quantities is achieved much faster.
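
    At the core of any RE/MREMD scheme is a Metropolis-style swap test between replicas held at different temperatures; a minimal sketch in reduced units (k_B = 1), with hypothetical energies. In MREMD, several replicas share each temperature (a "layer"), so the same test is applied both within and between layers.

        import math
        import random

        def attempt_swap(E_i, E_j, T_i, T_j):
            """Accept an exchange between replicas i and j with probability
            min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)])."""
            delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
            return delta >= 0.0 or random.random() < math.exp(delta)

        # Two temperature layers, two replicas per layer (energies invented).
        layers = {300.0: [-120.5, -118.9], 320.0: [-115.2, -116.7]}

        if attempt_swap(layers[300.0][0], layers[320.0][0], 300.0, 320.0):
            layers[300.0][0], layers[320.0][0] = (layers[320.0][0],
                                                  layers[300.0][0])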

  2. An SNP within the Angiotensin-Converting Enzyme Distinguishes between Sprint and Distance Performing Alaskan Sled Dogs in a Candidate Gene Analysis

    PubMed Central

    Huson, Heather J.; Byers, Alexandra M.; Runstadler, Jonathan

    2011-01-01

    The Alaskan sled dog offers a unique mechanism for studying the genetics of elite athletic performance. They are a group of mixed breed dogs, comprised of multiple common breeds, and a unique breed entity seen only as a part of the sled dog mix. Alaskan sled dogs are divided into 2 primary groups as determined by their racing skills. Distance dogs are capable of running over 1000 miles in 10 days, whereas sprint dogs run much shorter distances, approximately 30 miles, but in faster times, that is, 18–25 mph. Finding the genes that distinguish these 2 types of performers is likely to illuminate genetic contributors to human athletic performance. In this study, we tested for association between polymorphisms in 2 candidate genes, angiotensin-converting enzyme (ACE) and myostatin (MSTN), and enhanced speed and endurance performance in 174 Alaskan sled dogs. We observed 81 novel genetic variants within the ACE gene and 4 within the MSTN gene, including a polymorphism within the ACE gene that significantly (P value 2.38 × 10(-5)) distinguished the sprint versus distance populations. PMID:21846742

  3. Falco: a quick and flexible single-cell RNA-seq processing framework on the cloud.

    PubMed

    Yang, Andrian; Troup, Michael; Lin, Peijie; Ho, Joshua W K

    2017-03-01

    Single-cell RNA-seq (scRNA-seq) is increasingly used in a range of biomedical studies. Nonetheless, current RNA-seq analysis tools are not specifically designed to efficiently process scRNA-seq data due to their limited scalability. Here we introduce Falco, a cloud-based framework to enable parallelization of existing RNA-seq processing pipelines using the big data technologies Apache Hadoop and Apache Spark for performing massively parallel analysis of large-scale transcriptomic data. Using two public scRNA-seq datasets and two popular RNA-seq alignment/feature quantification pipelines, we show that the same processing pipeline runs 2.6-145.4 times faster using Falco than running on a highly optimized standalone computer. Falco also allows users to utilize low-cost spot instances of Amazon Web Services, providing a ∼65% reduction in cost of analysis. Falco is available via a GNU General Public License at https://github.com/VCCRI/Falco/. j.ho@victorchang.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
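
    A minimal PySpark sketch of the fan-out pattern such frameworks rely on: each cell's pipeline run is an independent task, so it can be mapped across all executors in a cluster. The align_and_count function and file list are hypothetical placeholders, not Falco's actual API.

        from pyspark import SparkContext

        def align_and_count(fastq_path):
            # Placeholder for one cell's alignment + feature quantification.
            return (fastq_path, {"genes_detected": 0})

        sc = SparkContext(appName="scRNAseqDemo")
        cells = [f"s3://bucket/cell_{i}.fastq.gz" for i in range(1000)]

        results = (sc.parallelize(cells, numSlices=100)
                     .map(align_and_count)
                     .collect())
        sc.stop()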

  4. Mobility for GCSS-MC through virtual PCs

    DTIC Science & Technology

    2017-06-01

    Mobile device access to GCSS-MC would allow Marines to access a required program for their mission using a form of computing ... network throughput applications with a device running on various operating systems with limited computational ability. The use of VPCs leads to a reduced need for network throughput and faster overall execution. Subject terms: GCSS-MC, enterprise resource planning, virtual personal computer.

  5. A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma

    NASA Astrophysics Data System (ADS)

    Kawazura, Y.; Barnes, M.

    2018-05-01

    This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al., 2009). This model captures ion kinetic effects that are important near the ion gyro-radius scale, while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to slab geometry (Numata et al., 2010). The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ∼2√(m_i/m_e) ≈ 100 times faster than AstroGK with a single ion species and kinetic electrons, where m_i/m_e is the ion-electron mass ratio. The improvement in computational time makes it feasible to execute ion-scale gyrokinetic simulations with high velocity-space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as the electron-ion temperature ratio and plasma beta.

  6. A Comparison of the Energetic Cost of Running in Marathon Racing Shoes.

    PubMed

    Hoogkamer, Wouter; Kipp, Shalaya; Frank, Jesse H; Farina, Emily M; Luo, Geng; Kram, Rodger

    2018-04-01

    Reducing the energetic cost of running seems the most feasible path to a sub-2-hour marathon. Footwear mass, cushioning, and bending stiffness each affect the energetic cost of running. Recently, prototype running shoes were developed that combine a new highly compliant and resilient midsole material with a stiff embedded plate. The aim of this study was to determine if, and to what extent, these newly developed running shoes reduce the energetic cost of running compared with established marathon racing shoes. 18 high-caliber athletes ran six 5-min trials (three shoes × two replicates) in prototype shoes (NP), and two established marathon shoes (NS and AB) during three separate sessions: 14, 16, and 18 km/h. We measured submaximal oxygen uptake and carbon dioxide production during minutes 3-5 and averaged energetic cost (W/kg) for the two trials in each shoe model. Compared with the established racing shoes, the new shoes reduced the energetic cost of running in all 18 subjects tested. Averaged across all three velocities, the energetic cost for running in the NP shoes (16.45 ± 0.89 W/kg; mean ± SD) was 4.16 and 4.01% lower than in the NS and AB shoes, when shoe mass was matched (17.16 ± 0.92 and 17.14 ± 0.97 W/kg, respectively, both p < 0.001). The observed percent changes were independent of running velocity (14-18 km/h). The prototype shoes lowered the energetic cost of running by 4% on average. We predict that with these shoes, top athletes could run substantially faster and achieve the first sub-2-hour marathon.

  7. A Methodological Report: Adapting the 505 Change-of-Direction Speed Test Specific to American Football.

    PubMed

    Lockie, Robert G; Jalilvand, Farzad; Orjalo, Ashley J; Giuliano, Dominic V; Moreno, Matthew R; Wright, Glenn A

    2017-02-01

    Lockie, RG, Jalilvand, F, Orjalo, AJ, Giuliano, DV, Moreno, MR, and Wright, GA. A methodological report: Adapting the 505 change-of-direction speed test specific to American football. J Strength Cond Res 31(2): 539-547, 2017-The 505 involves a 10-m sprint past a timing gate, followed by a 180° change-of-direction (COD) performed over 5 m. This methodological report investigated an adapted 505 (A505) designed to be football-specific by changing the distances to 10 and 5 yd. Twenty-five high school football players (6 linemen [LM]; 8 quarterbacks, running backs, and linebackers [QB/RB/LB]; 11 receivers and defensive backs [R/DB]) completed the A505 and 40-yd sprint. The difference between A505 and 0 to 10-yd time determined the COD deficit for each leg. In a follow-up session, 10 subjects completed the A505 again and 10 subjects completed the 505. Reliability was analyzed by t-tests to determine between-session differences, typical error (TE), and coefficient of variation. Test usefulness was examined via TE and smallest worthwhile change (SWC) differences. Pearson's correlations calculated relationships between the A505 and 505, and A505 and COD deficit with the 40-yd sprint. A 1-way analysis of variance (p ≤ 0.05) derived between-position differences in the A505 and COD deficit. There were no between-session differences for the A505 (p = 0.45-0.76; intraclass correlation coefficient = 0.84-0.95; TE = 2.03-4.13%). Additionally, the A505 was capable of detecting moderate performance changes (SWC0.5 > TE). The A505 correlated with the 505 and 40-yard sprint (r = 0.58-0.92), suggesting the modified version assessed similar qualities. Receivers and defensive backs were faster than LM in the A505 for both legs, and right-leg COD deficit. Quarterbacks, running backs, and linebackers were faster than LM in the right-leg A505. The A505 is reliable, can detect moderate performance changes, and can discriminate between football position groups.

  8. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs ran about 14.4-15.9 times faster, while Unphased jobs ran 1.1-18.6 times faster, compared with the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.

  10. Efficient sequential and parallel algorithms for record linkage.

    PubMed

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
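
    A compact sketch of the two ideas highlighted above: sort to collapse exact duplicates before any expensive comparison, then link similar records and read off connected components with union-find. The toy records and similarity test are illustrative.

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def linkage(records, similar):
            # Step 1: sorting the de-duplicated records groups identical
            # entries cheaply, before any edit-distance work.
            unique = sorted(set(records))
            parent = list(range(len(unique)))
            # Step 2: link similar pairs, then take connected components.
            for i in range(len(unique)):
                for j in range(i + 1, len(unique)):
                    if similar(unique[i], unique[j]):
                        parent[find(parent, i)] = find(parent, j)
            clusters = {}
            for i, rec in enumerate(unique):
                clusters.setdefault(find(parent, i), []).append(rec)
            return list(clusters.values())

        records = ["jon smith", "jon smith", "jon smyth", "mary jones"]
        one_off = lambda a, b: (len(a) == len(b) and
                                sum(x != y for x, y in zip(a, b)) <= 1)
        print(linkage(records, one_off))
        # [['jon smith', 'jon smyth'], ['mary jones']]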

  11. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
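
    To make the quoted gap concrete, the worst-case exponents can be compared directly; the classical-to-quantum ratio grows as 2^(31n/96 − n/4) = 2^(7n/96). A quick check (plain arithmetic, not from the paper):

        for n in (96, 192, 384):
            quantum = 2 ** (n / 4)            # n/4 = 24n/96
            classical = 2 ** (31 * n / 96)
            print(f"n={n}: classical/quantum = 2^{7 * n / 96:.0f} "
                  f"= {classical / quantum:.3g}")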

  12. Size exclusion chromatography with superficially porous particles.

    PubMed

    Schure, Mark R; Moran, Robert E

    2017-01-13

    A comparison is made using size-exclusion chromatography (SEC) of synthetic polymers between fully porous particles (FPPs) and superficially porous particles (SPPs) with similar particle diameters, pore sizes and equal flow rates. Polystyrene molecular weight standards with a mobile phase of tetrahydrofuran are utilized for all measurements conducted with standard HPLC equipment. Although it is traditionally thought that larger pore volume is thermodynamically advantageous in SEC for better separations, SPPs have kinetic advantages, and these will be shown to compensate for the loss in pore volume compared to FPPs. The comparison metrics include the elution range (smaller with SPPs), the plate count (larger for SPPs), the rate of production of theoretical plates (larger for SPPs) and the specific resolution (larger with FPPs). Advantages of using SPPs for SEC are discussed, showing that similar separations can be conducted faster using SPPs. SEC using SPPs offers similar peak capacities to SEC using FPPs but with faster operation. This also suggests that SEC conducted in the second dimension of a two-dimensional liquid chromatograph may benefit from reduced run time and equivalently reduced peak width, making SPPs advantageous for sampling the first dimension by the second-dimension separator. Additional advantages are discussed for biomolecules, along with a discussion of optimization criteria for size-based separations. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Optimizing Requirements Decisions with KEYS

    NASA Technical Reports Server (NTRS)

    Jalali, Omid; Menzies, Tim; Feather, Martin

    2008-01-01

    Recent work with NASA's Jet Propulsion Laboratory has allowed external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks, which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem; however, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude: prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate, whereas KEYS runs much faster; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.

  14. The effects of multiple obstacles on the locomotor behavior and performance of a terrestrial lizard.

    PubMed

    Parker, Seth E; McBrayer, Lance D

    2016-04-01

    Negotiation of variable terrain is important for many small terrestrial vertebrates. Variation in the running surface resulting from obstacles (woody debris, vegetation, rocks) can alter escape paths and running performance. The ability to navigate obstacles likely influences survivorship through predator evasion success and other key ecological tasks (finding mates, acquiring food). Earlier work established that running posture and sprint performance are altered when organisms face an obstacle, and yet studies involving multiple obstacles are limited. Indeed, some habitats are cluttered with obstacles, whereas others are not. For many species, obstacle density may be important in predator escape and/or colonization potential by conspecifics. This study examines how multiple obstacles influence running behavior and locomotor posture in lizards. We predict that an increasing number of obstacles will increase the frequency of pausing and decrease sprint velocity. Furthermore, bipedal running over multiple obstacles is predicted to maintain greater mean sprint velocity compared with quadrupedal running, thereby revealing a potential advantage of bipedalism. Lizards were filmed running through a racetrack with zero, one or two obstacles. Bipedal running posture over one obstacle was significantly faster than quadrupedal posture. Bipedal running trials contained fewer total strides than quadrupedal ones. But on addition of a second obstacle, the number of bipedal strides decreased. Increasing obstacle number led to slower and more intermittent locomotion. Bipedalism provided clear advantages for one obstacle, but was not associated with further benefits for an additional obstacle. Hence, bipedalism helps mitigate obstacle negotiation, but not when numerous obstacles are encountered in succession. © 2016. Published by The Company of Biologists Ltd.

  15. Real-world hydrologic assessment of a fully-distributed hydrological model in a parallel computing environment

    NASA Astrophysics Data System (ADS)

    Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.

    2011-10-01

    A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
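
    A toy sketch of the load-balancing step: given sub-basin sizes produced by the channel-network decomposition, greedily assign each sub-basin to the currently least-loaded processor. The sizes and processor count are invented; tRIBS's actual partitioner also accounts for boundary exchanges.

        import heapq

        def balance(subbasin_sizes, n_procs):
            """Greedy longest-processing-time assignment of sub-basins."""
            heap = [(0, p) for p in range(n_procs)]   # (load, processor)
            assignment = {p: [] for p in range(n_procs)}
            for basin, size in sorted(subbasin_sizes.items(),
                                      key=lambda kv: -kv[1]):
                load, p = heapq.heappop(heap)
                assignment[p].append(basin)
                heapq.heappush(heap, (load + size, p))
            return assignment

        sizes = {"A": 5400, "B": 3100, "C": 2900, "D": 1200, "E": 900}
        print(balance(sizes, n_procs=2))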

  16. Innovations for competitiveness: European views on "better-faster-cheaper"

    NASA Astrophysics Data System (ADS)

    Atzei, A.; Groepper, P.; Novara, M.; Pseiner, K.

    1999-09-01

    The paper elaborates on "lessons learned" from two recent ESA workshops, one focusing on the role of innovation in the competitiveness of the space sector and the second on technology and engineering aspects conducive to better, faster and cheaper space programmes. The paper focuses primarily on four major aspects, namely: a) the adaptations of industrial and public organisations to the global market needs; b) the understanding of the bottleneck factors limiting competitiveness; c) the trends toward new system architectures and new engineering and production methods; d) the understanding of the role of new technology in future applications. Under the pressure of market forces and the influence of many global and regional players, applications of space systems and technology are becoming more and more competitive. It is well recognised that without a major effort for innovation in industrial practices, organisations, R&D, marketing and financial approaches, the European space sector will stagnate and lose its competence as well as its competitiveness. It is also recognised that a programme run according to the "better, faster, cheaper" philosophy relies on much closer integration of system design, development and verification, and draws heavily on a robust and comprehensive programme of technology development, which must run in parallel and off-line with respect to flight programmes. A company's innovation capabilities will determine its future competitive advantage (in time, cost, performance or value) and overall growth potential. Innovation must be a process that can be counted on to provide repetitive, sustainable, long-term performance improvements. As such, it need not depend on great breakthroughs in technology and concepts (which are accidental and rare). Rather, it could be based on bold evolution through the establishment of know-how, application of best practices, process effectiveness and high standards, performance measurement, and attention to customers and professional marketing. Having a technological lead allows industry to gain a competitive advantage in performance, cost and opportunities. Instrumental to better competitiveness is an R&D effort based on the adaptation of high-technology products, capable of capturing new users, increasing production, decreasing cost and delivery time, and integrating high levels of intelligence, information and autonomy. New systems will have to take into account from the start what types of technologies are being developed or are already available in areas outside space, and be designed accordingly. The future challenge for "faster, better, cheaper" appears to concern primarily cost-effective, high-performance autonomous spacecraft; cost-effective, reliable launching means; and intelligent data-fusion technologies and robust software serving mass-market real-time services, distributed via EHF bands and the Internet. In conclusion, it can be noted that in the past few years new approaches have considerably enlarged the ways in which space missions can be implemented. They are supported by true innovations in mission concepts, system architecture, development and technologies, in particular for the development of initiatives based on multi-mission mini-satellite platforms for communication and Earth observation missions.
    There are also definite limits to cost cutting (such as lowering head counts and increasing efficiency), and therefore the strategic perspective must be shifted from the present emphasis on cost-driven enhancement to revenue-driven improvements for growth. And since the product life-cycle is continuously shortening, competitiveness is linked very strongly with the capability to generate new technology products which enhance cost/benefit performance.

  17. Refocusing from a plenoptic camera within seconds on a mobile phone

    NASA Astrophysics Data System (ADS)

    Gómez-Cárdenes, Óscar; Marichal-Hernández, José G.; Rosa, Fernando L.; Lüke, Jonas P.; Fernández-Valdivia, Juan José; Rodríguez-Ramos, José M.

    2014-05-01

    Refocusing a plenoptic image by digital means after the exposure has been thoroughly studied in recent years, but few efforts have been made in the direction of real-time implementation in a constrained environment such as that provided by current mobile phones and tablets. In this work we address this challenge, demonstrating that a complete focal stack, comprising 31 refocused planes from a (256×16)² plenoptic image, can be computed within seconds on a current SoC mobile phone platform. The choice of an appropriate algorithm is the key to success. In a previous work we developed an algorithm, the fast approximate 4D:3D discrete Radon transform, that performs this task with linear time complexity, where others obtain quadratic or linearithmic time complexity. Moreover, that algorithm does not require complex-number transforms, trigonometric calculus, or even multiplications or floating-point numbers. Our algorithm has been ported to a multi-core ARM chip on an off-the-shelf tablet running Android. A careful implementation exploiting parallelism at several levels has been necessary. The final implementation takes advantage of multi-threading in native code and NEON SIMD instructions. As a result, our current implementation completes the refocusing task within seconds for a 16-megapixel image, much faster than previous attempts running on powerful PC platforms or dedicated hardware. The times consumed by the different stages of the digital refocusing are given and the strategies to achieve this result are discussed. Time results are given for a variety of environments within the Android ecosystem, from the weaker/cheaper SoCs to the top of the line for 2013.
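
    For reference, the baseline shift-and-sum refocusing that a focal-stack computation generalizes fits in a few lines of numpy; the paper's Radon-transform formulation achieves the same result with better complexity and integer-only arithmetic. Array shapes here are illustrative.

        import numpy as np

        def refocus(subapertures, alpha):
            """Naive shift-and-sum refocusing of a 4-D light field of
            shape (U, V, H, W); each view is shifted in proportion to
            its offset from the central view, then all are averaged."""
            U, V, H, W = subapertures.shape
            out = np.zeros((H, W))
            for u in range(U):
                for v in range(V):
                    dy = int(round(alpha * (u - U // 2)))
                    dx = int(round(alpha * (v - V // 2)))
                    out += np.roll(subapertures[u, v], (dy, dx), axis=(0, 1))
            return out / (U * V)

        lf = np.random.rand(16, 16, 256, 256)          # illustrative light field
        stack = [refocus(lf, a) for a in np.linspace(-1.5, 1.5, 31)]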

  18. Differential correlation for sequencing data.

    PubMed

    Siska, Charlotte; Kechris, Katerina

    2017-01-19

    Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
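
    The reason rank correlation suits counts is that it depends only on orderings, not on the skewed, negative-binomial-shaped magnitudes; a minimal check with SciPy on toy counts (not the study's data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        gene = rng.negative_binomial(n=5, p=0.3, size=50)    # RNA-seq-like counts
        mirna = np.maximum(0, 40 - gene + rng.poisson(3, size=50))

        rho, pval = stats.spearmanr(gene, mirna)
        print(f"Spearman rho = {rho:.2f}, p = {pval:.3g}")

        # Pearson on the raw counts is more sensitive to extreme values.
        r, p = stats.pearsonr(gene, mirna)
        print(f"Pearson r = {r:.2f}")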

  19. ARKS: chromosome-scale scaffolding of human genome drafts with linked read kmers.

    PubMed

    Coombe, Lauren; Zhang, Jessica; Vandervalk, Benjamin P; Chu, Justin; Jackman, Shaun D; Birol, Inanc; Warren, René L

    2018-06-20

    The long-range sequencing information captured by linked reads, such as those available from 10× Genomics (10xG), helps resolve genome sequence repeats, and yields accurate and contiguous draft genome assemblies. We introduce ARKS, an alignment-free linked read genome scaffolding methodology that uses linked reads to organize genome assemblies further into contiguous drafts. Our approach departs from other read alignment-dependent linked read scaffolders, including our own (ARCS), and uses a kmer-based mapping approach. The kmer mapping strategy has several advantages over read alignment methods, including better usability and faster processing, as it precludes the need for input sequence formatting and draft sequence assembly indexing. The reliance on kmers instead of read alignments for pairing sequences relaxes the workflow requirements, and drastically reduces the run time. Here, we show how linked reads, when used in conjunction with Hi-C data for scaffolding, improve a draft human genome assembly of PacBio long-read data five-fold (baseline vs. ARKS NG50 = 4.6 vs. 23.1 Mbp, respectively). We also demonstrate how the method provides further improvements of a megabase-scale Supernova human genome assembly (NG50 = 14.74 Mbp vs. 25.94 Mbp before and after ARKS), which itself exclusively uses linked read data for assembly, with an execution speed six to nine times faster than competitive linked read scaffolders (~ 10.5 h compared to 75.7 h, on average). Following ARKS scaffolding of a human genome 10xG Supernova assembly (of cell line NA12878), fewer than 9 scaffolds cover each chromosome, except the largest (chromosome 1, n = 13). ARKS uses a kmer mapping strategy instead of linked read alignments to record and associate the barcode information needed to order and orient draft assembly sequences. The simplified workflow, when compared to that of our initial implementation, ARCS, markedly improves run time performances on experimental human genome datasets. Furthermore, the novel distance estimator in ARKS utilizes barcoding information from linked reads to estimate gap sizes. It accomplishes this by modeling the relationship between known distances of a region within contigs and calculating associated Jaccard indices. ARKS has the potential to provide correct, chromosome-scale genome assemblies, promptly. We expect ARKS to have broad utility in helping refine draft genomes.
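
    The barcode-overlap computation at the heart of the distance estimator reduces to Jaccard indices between the sets of barcodes observed at two contig ends; a minimal sketch (the barcode sets are invented):

        def jaccard(a, b):
            """Jaccard index |A & B| / |A | B| of two barcode sets."""
            return len(a & b) / len(a | b) if (a or b) else 0.0

        # Hypothetical linked-read barcodes seen at two contig ends.
        end_of_contig1 = {"AACG", "TTGC", "GGAT", "CCTA"}
        start_of_contig2 = {"TTGC", "GGAT", "CCTA", "ATAT"}

        j = jaccard(end_of_contig1, start_of_contig2)
        print(f"Jaccard = {j:.2f}")   # higher overlap suggests a smaller gap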

  20. MOVES-Matrix and distributed computing for microscale line source dispersion analysis.

    PubMed

    Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L

    2017-07-01

    MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, the modeling setup process in AERMOD, including extensive data requirements and required input formats, leads to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible combinations of vehicle source type, fuel, operating conditions, and environmental parameters to create a huge multi-dimensional emission rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, which are regulatory models approved by the U.S. EPA for conformity analysis, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying potential areas that may exceed air quality standards before using AERMOD (and for identifying areas that are exceedingly unlikely to exceed air quality standards). The CALINE4 worst-case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed.
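
    The speedup comes from replacing repeated MOVES executions with constant-time lookups in a precomputed table; a minimal sketch of the pattern (the key dimensions, bins and rates are invented placeholders, not actual MOVES-Matrix values):

        # Precomputed emission rates keyed by discretized conditions.
        emission_matrix = {
            # (source type, speed bin mph, temperature bin F): grams/mile
            ("passenger_car", 30, 70): 0.21,
            ("passenger_car", 40, 70): 0.18,
            ("transit_bus", 30, 70): 1.35,
        }

        def emission_rate(source_type, speed_mph, temp_f):
            key = (source_type,
                   10 * round(speed_mph / 10),
                   10 * round(temp_f / 10))
            return emission_matrix[key]   # O(1) lookup, no MOVES run

        # The returned rates feed the line-source dispersion model.
        print(emission_rate("passenger_car", 32.0, 68.0))   # 0.21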

  1. Technical strategy of triple jump: differences of inverted pendulum model between hop-dominated and balance techniques.

    PubMed

    Fujibayashi, Nobuaki; Otsuka, Mitsuo; Yoshioka, Shinsuke; Isaka, Tadao

    2017-10-24

    The present study aims to cross-sectionally clarify the characteristics of the motions of an inverted pendulum model, the stance leg, the swing leg and the arms in different triple-jumping techniques, to understand whether hop displacement is relatively longer than step and jump displacements. Eighteen male athletes performed the triple jump with a full run-up. Based on their technique, the jumpers were classified as hop-dominated (n = 10) or balance (n = 8) jumpers. The kinematic data were calculated using motion capture and compared between the two techniques using the inverted pendulum model. The hop-dominated jumpers had a significantly longer hop displacement and faster vertical centre-of-mass (COM) velocity of the whole body at hop take-off, which was generated by faster rotation of the inverted pendulum model and faster swinging of the arms. Conversely, balance jumpers had a significantly longer jump displacement and faster horizontal COM velocity of the whole body at take-off, which was generated by a stiffer inverted pendulum model and stance leg. The results demonstrate that hop-dominated and balance jumpers enhanced their respective dominant jump displacement using different swing- and stance-leg motions. This information may help to enhance the actual displacement of triple jumpers using different jumping techniques.

  2. Influence of running velocity on vertical, leg and joint stiffness : modelling and recommendations for future research.

    PubMed

    Brughelli, Matt; Cronin, John

    2008-01-01

    Human running can be modelled as either a spring-mass model or multiple springs in series. A force is required to stretch or compress the spring, and thus stiffness, the variable of interest in this paper, can be calculated from the ratio of this force to the change in spring length. Given the link between force and length change, muscle stiffness and mechanical stiffness have been areas of interest to researchers, clinicians, and strength and conditioning practitioners for many years. This review focuses on mechanical stiffness, and in particular, vertical, leg and joint stiffness, since these are the only stiffness types that have been directly calculated during human running. It has been established that as running velocity increases from slow-to-moderate values, leg stiffness remains constant while both vertical stiffness and joint stiffness increase. However, no studies have calculated vertical, leg or joint stiffness over the full range from slow-to-moderate up to maximal velocities in an athletic population. Therefore, the effects of faster running velocities on stiffness are relatively unexplored. Furthermore, no experimental research has examined the effects of training on vertical, leg or joint stiffness and the subsequent effects on running performance. Various methods of training (Olympic style weightlifting, heavy resistance training, plyometrics, eccentric strength training) have been shown to be effective at improving running performance. However, the effects of these training methods on vertical, leg and joint stiffness are unknown. As a result, the true importance of stiffness to running performance remains unexplored, and the best practice for changing stiffness to optimize running performance is speculative at best. It is our hope that a better understanding of stiffness, and the influence of running speed on stiffness, will lead to greater interest and an increase in experimental research in this area.
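
    For reference, the three stiffness measures discussed in this review are conventionally defined as ratios of load to deformation; a standard formulation from the spring-mass literature (notation assumed here, not quoted from the review) is:

    ```latex
    k_{\mathrm{vert}} = \frac{F_{\max}}{\Delta y}, \qquad
    k_{\mathrm{leg}}  = \frac{F_{\max}}{\Delta L}, \qquad
    k_{\mathrm{joint}} = \frac{\Delta M}{\Delta \theta}
    ```

    where F_max is the peak vertical ground reaction force, Δy the vertical displacement of the centre of mass, ΔL the compression of the leg spring, and ΔM/Δθ the change in joint moment per change in joint angle.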

  3. Temporal variation of dose rate distribution around the Fukushima Daiichi nuclear power station using unmanned helicopter.

    PubMed

    Sanada, Yukihisa; Orita, Tadashi; Torii, Tatsuo

    2016-12-01

    Aerial radiological survey using an unmanned aerial vehicle (UAV) was applied to measure surface contamination around the Fukushima Daiichi nuclear power station (FDNPS). An unmanned helicopter monitoring system (UHMS) was developed to survey the environmental effects of the radioactive cesium scattered as a result of the FDNPS accident. The UHMS was used to monitor the area surrounding the FDNPS six times from 2012 to 2015. Quantitative changes in the radioactivity distribution trend were revealed from the results of these monitoring runs. With this information, we found that the actual reduction of the dose rate was faster than that calculated from the physical half-life of radiocesium alone. This indicates that attenuation of radiation due to radiocesium penetrating into the soil is the dominant cause of the additional dose-rate reduction.
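
    As a rough illustration of the physical-decay baseline the authors compare against, the sketch below computes the relative dose rate expected from radioactive decay alone. The half-lives are physical constants, but the initial split of the dose rate between Cs-134 and Cs-137 (f134) is an assumed illustrative value, not a figure from the paper.

    ```python
    import numpy as np

    T134, T137 = 2.065, 30.17          # half-lives in years (physical constants)

    def relative_dose_rate(t_years, f134=0.6):
        # f134 is an ASSUMED initial Cs-134 share of the dose rate, for illustration
        lam134 = np.log(2) / T134
        lam137 = np.log(2) / T137
        return f134 * np.exp(-lam134 * t_years) + (1 - f134) * np.exp(-lam137 * t_years)

    # Any measured decline faster than this curve points to non-physical processes,
    # such as radiocesium migrating deeper into the soil (increased shielding).
    print(relative_dose_rate(np.array([0.0, 1.0, 2.0, 3.0, 4.0])))
    ```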

  4. An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.

    NASA Technical Reports Server (NTRS)

    Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.

    1973-01-01

    A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite-width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as to the analytical results of other investigators.

  5. Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2013-09-01

    The numerical reconstruction of digitally recorded holograms suffers from speckle noise. We show an accelerated method that uses general-purpose computing on graphics processing units to reduce that noise. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between speckle-noise reduction and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
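
    The 58% figure matches the textbook expectation for averaging N uncorrelated speckle patterns: speckle contrast falls roughly as 1/sqrt(N), so N = 6 gives about 1 - 1/sqrt(6) ≈ 59%. A minimal numerical sketch, assuming a fully developed (multiplicative exponential) speckle model rather than the paper's actual optical setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.ones((1024, 1024))

    def speckled(img):
        # fully developed speckle: multiplicative exponential noise (assumed model)
        return img * rng.exponential(1.0, img.shape)

    # superimpose six uncorrelated speckled reconstructions of the same scene
    stack = np.mean([speckled(signal) for _ in range(6)], axis=0)

    one = speckled(signal)
    contrast_one = one.std() / one.mean()      # ~1.0 for a single pattern
    contrast_six = stack.std() / stack.mean()  # ~1/sqrt(6)
    print(1 - contrast_six / contrast_one)     # ~0.59, close to the reported 58%
    ```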

  6. Study of Rubber Composites with Positron Doppler Broadening Spectroscopy: Consideration of Counting Rate

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Quarles, C. A.

    2007-10-01

    We have used positron Doppler Broadening Spectroscopy (DBS) to investigate the uniformity of rubber-carbon black composite samples. The amount of carbon black added to a rubber sample is characterized by phr, the number of grams of carbon black per hundred grams of rubber. Typical concentrations in rubber tires are 50 phr. It has been shown that the S parameter measured by DBS depends on the phr of the sample, so the variation in carbon black concentration can easily be measured to 0.5 phr. In doing the experiments, we observed a dependence of the S parameter on small variations in the counting rate, i.e. on deadtime. By carefully calibrating this deadtime correction we can significantly reduce the experimental run time and thus determine the uniformity of extended samples faster.
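
    For context, a common first-order (non-paralyzable) deadtime model relates the measured count rate m to the true rate n; this is a standard textbook correction, offered here as an assumption since the abstract does not specify the authors' calibration form:

    ```python
    # Non-paralyzable deadtime correction (standard first-order model, assumed).
    def true_rate(m, tau):
        # m: measured counts/s, tau: deadtime per event in seconds
        return m / (1.0 - m * tau)

    # e.g. a measured 5.0e4 counts/s with tau = 2 microseconds
    print(true_rate(5.0e4, 2e-6))   # ~5.56e4 counts/s
    ```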

  7. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    PubMed

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphic Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the intertask parallelization technique, using the GPU only to perform the SW computations one at a time. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
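
    For readers unfamiliar with the kernel being accelerated, here is a compact CPU reference of Smith-Waterman local-alignment scoring with a linear gap penalty. This is an illustrative baseline only; CUDA-SWfr parallelizes the computation on the GPU with its own scheme, and this sketch makes no claim about its internals.

    ```python
    # Reference Smith-Waterman local alignment score (linear gap penalty).
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]   # DP matrix, clamped at zero
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best   # score of the best local alignment anywhere in the matrix

    print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
    ```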

  8. Resolution, Scales and Predictability: Is High Resolution Detrimental To Predictability At Extended Forecast Times?

    NASA Astrophysics Data System (ADS)

    Mesinger, F.

    The traditional views hold that high-resolution limited area models (LAMs) downscale large-scale lateral boundary information, and that predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited area Eta Model at NCEP has the additional handicap of being driven by LBCs from the previous Avn global model run, which at 0000 and 1200 UTC is estimated to amount to about an 8 h loss in accuracy. This should make its skill relative to that of the Avn deteriorate even faster. These views are challenged by various Eta results, including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.

  9. Certification trails and software design for testability

    NASA Technical Reports Server (NTRS)

    Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.

    1993-01-01

    Design techniques which may be applied to make program testing easier were investigated. Methods for modifying a program to generate additional data, which we refer to as a certification trail, are presented. This additional data is designed to allow the program output to be checked more quickly and effectively. Certification trails have previously been described primarily from a theoretical perspective. A comprehensive attempt to assess experimentally the performance and overall value of the certification trail method is reported. The method was applied to nine fundamental, well-known algorithms for the following problems: convex hull, sorting, Huffman tree, shortest path, closest pair, line segment intersection, longest increasing subsequence, skyline, and Voronoi diagram. Run-time performance data for each of these problems is given, and selected problems are described in more detail. Our results indicate that there are many cases in which certification trails allow for significantly faster overall program execution time than a 2-version programming approach, and also give further evidence of the breadth of applicability of this method.
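
    To make the idea concrete, here is a toy certification trail for sorting (my illustration, far simpler than the paper's nine case studies): the first program emits the sorting permutation as the trail, and a second, simpler checker certifies the output in linear time instead of re-sorting.

    ```python
    # First run: produce the output plus a certification trail (the permutation).
    def sort_with_trail(xs):
        trail = sorted(range(len(xs)), key=lambda i: xs[i])
        return [xs[i] for i in trail], trail

    # Second run: certify the output in O(n) using the trail.
    def certify(xs, output, trail):
        n = len(xs)
        seen = [False] * n
        for i in trail:                 # trail must be a permutation of 0..n-1
            if not 0 <= i < n or seen[i]:
                return False
            seen[i] = True
        applied = [xs[i] for i in trail]
        return applied == output and all(
            applied[k] <= applied[k + 1] for k in range(n - 1))

    ys, tr = sort_with_trail([3, 1, 2])
    assert certify([3, 1, 2], ys, tr)
    ```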

  10. Next Processor Module: A Hardware Accelerator of UT699 LEON3-FT System for On-Board Computer Software Simulation

    NASA Astrophysics Data System (ADS)

    Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David

    2014-08-01

    On-Board Computers (OBC) increasingly use integrated systems-on-chip (SOC) that embed processors running from 50 MHz up to several hundreds of MHz, around which are plugged dedicated communication controllers together with other Input/Output channels. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real-time and with cycle-true timing of execution, is not achieved with current purely software simulators. In recent years, hybrid solutions have been put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work engaged by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated HW simulator of the UT699 by mid-2012 and which is now qualified and fully used in operational contexts.

  11. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance acceleration through efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.

  12. Influence of music on maximal self-paced running performance and passive post-exercise recovery rate.

    PubMed

    Lee, S; Kimmerly, D

    2014-10-30

    The purpose of this study was to examine the influence of fast tempo music (FM) on self-paced running performance (heart rate, running speed, ratings of perceived exertion), and of slow tempo music (SM) on post-exercise heart rate and blood lactate recovery rates. Twelve participants (5 women) completed three randomly assigned conditions: static noise (control), FM and SM. Each condition consisted of self-paced treadmill running and supine post-exercise recovery periods (20 min each). Average running speed, heart rate (HR) and ratings of perceived exertion (RPE) were measured during the treadmill running period, while HR and blood lactate were measured during the recovery period. Listening to FM during exercise resulted in a faster self-selected running speed (10.8 ± 1.7 vs. 9.9 ± 1.4 km h(-1), p < 0.001) and a higher peak HR (184 ± 12 vs. 177 ± 17 beats min(-1), p < 0.01) without a corresponding difference in peak RPE (FM, 16.8 ± 1.8 vs. SM 15.7 ± 1.9, p = 0.10). Listening to SM during the post-exercise period reduced HR throughout (main effect p < 0.001) and blood lactate at the end of recovery (2.8 ± 0.4 vs. 4.7 ± 0.8 mmol L(-1), p < 0.05). Listening to FM during exercise can increase self-paced intensity without altering perceived exertion levels, while listening to SM after exercise can accelerate the recovery rate back to resting levels.

  13. The influence of maximum running speed on eye size: a test of Leuckart's Law in mammals.

    PubMed

    Heard-Booth, Amber N; Kirk, E Christopher

    2012-06-01

    Vertebrate eye size is influenced by many factors, including body or head size, diet, and activity pattern. Locomotor speed has also been suggested to influence eye size in a relationship known as Leuckart's Law. Leuckart's Law proposes that animals capable of achieving fast locomotor speeds require large eyes to enhance visual acuity and avoid collisions with environmental obstacles. The selective influence of rapid flight has been invoked to explain the relatively large eyes of birds, but Leuckart's Law remains untested in nonavian vertebrates. This study investigates the relationship between eye size and maximum running speed in a diverse sample of mammals. Measures of axial eye diameter, maximum running speed, and body mass were collected from the published literature for 50 species from 10 mammalian orders. This analysis reveals that absolute eye size is significantly positively correlated with maximum running speed in mammals. Moreover, the relationship between eye size and running speed remains significant when the potentially confounding effects of body mass and phylogeny are statistically controlled. The results of this analysis are therefore consistent with the expectations of Leuckart's Law and demonstrate that faster-moving mammals have larger eyes than their slower-moving close relatives. Accordingly, we conclude that maximum running speed is one of several key selective factors that have influenced the evolution of eye size in mammals.

  14. Revisiting the child health-wealth nexus.

    PubMed

    Fakir, Adnan M S

    2016-12-01

    The causal link between a household's economic standing and child health is known to suffer from endogeneity. While past studies have shown the causal link to be small, albeit statistically significant, this paper aims to estimate the causal effect to investigate whether the effect of income, after controlling for the endogeneity, remains small in the long run. By correcting for the bias, and knowing the bias direction, one can also infer the underlying backward effect. This paper uses an instrumental variables two-stage least-squares estimation on the Young Lives 2009 cross-sectional dataset from Andhra Pradesh, India, to understand the aforementioned relationship. The selected measure of household economic standing differentially affects the estimation. There is a significant positive effect of both short-run household expenditure and long-run household wealth on child stunting, with the latter having a larger impact. The backward link running from child health to household income is likely an inverse association in our sample, with lower child health inducing higher earnings. While higher average community education improved child health, increased community entertainment expenditure is found to have a negative effect. While policies geared towards improving household wealth will decrease child stunting in the long run, maternal education and the community play an equally reinforcing role in improving child health and are perhaps faster routes to achieving the goal of better child health in the short run.

  15. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulations of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments for Run-2 and beyond. Studying data-access patterns involves validating the quality of the monitoring data collected on the "popularity" of each dataset, analysing the frequency and pattern of accesses to different datasets by analysis end-users, exploring different views of the popularity data (by physics activity, by region, by data type), studying the evolution of Run-1 data exploitation over time, and evaluating the impact of different data placement and distribution choices on the available network and storage resources and on computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.

  16. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. However, the memory requirements of current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
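
    To illustrate the referential idea in its simplest form (a greedy toy, not the paper's adaptive method), the sketch below encodes a target sequence as (position, length) references into a known reference, with literal fallbacks:

    ```python
    # Greedy referential compression sketch: encode target against reference.
    def compress(reference, target, min_match=8):
        # index the first position of every min_match-mer of the reference
        index = {}
        for i in range(len(reference) - min_match + 1):
            index.setdefault(reference[i:i + min_match], i)
        ops, i = [], 0
        while i < len(target):
            pos = index.get(target[i:i + min_match])
            if pos is None:
                ops.append(("lit", target[i]))    # no anchor: emit a literal
                i += 1
                continue
            length = min_match                     # extend the seed match greedily
            while (i + length < len(target) and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            ops.append(("ref", pos, length))       # store only the difference
            i += length
        return ops

    def decompress(reference, ops):
        return "".join(op[1] if op[0] == "lit" else reference[op[1]:op[1] + op[2]]
                       for op in ops)

    ref = "ACGTACGGTTGACCATTGCA" * 3
    tgt = ref[2:40] + "N" + ref[10:35]
    assert decompress(ref, compress(ref, tgt)) == tgt
    ```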

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  19. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    PubMed

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible, and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
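
    As a concrete instance of the EXPM idea, the sketch below uses the standard Van Loan block-matrix construction to compute the expected time spent in a state conditioned on the chain's end-points. This is my illustration of the general technique in Python/SciPy, not the paper's R implementation.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expected_time_in_state(Q, t, a, b, i):
        # E[ time spent in state i on [0, t] | X_0 = a, X_t = b ], via Van Loan:
        # the top-right block of expm([[Q, E], [0, Q]] * t) equals the integral
        # of exp(Q s) E exp(Q (t - s)) ds over [0, t].
        n = Q.shape[0]
        E = np.zeros((n, n))
        E[i, i] = 1.0                      # indicator of occupying state i
        A = np.block([[Q, E], [np.zeros((n, n)), Q]])
        integral = expm(A * t)[:n, n:]
        P = expm(Q * t)                    # end-point transition probabilities
        return integral[a, b] / P[a, b]

    # Two-state toy chain: leaves state 0 at rate 1, state 1 at rate 2.
    Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
    print(expected_time_in_state(Q, t=1.0, a=0, b=0, i=0))
    ```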

  1. Methods for Semi-automated Indexing for High Precision Information Retrieval

    PubMed Central

    Berrios, Daniel C.; Cucina, Russell J.; Fagan, Lawrence M.

    2002-01-01

    Objective. To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. Design. Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. Participants. Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. Measurements. Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. Results. Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65%, with means of 41%, 31%, and 40% for the three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in the three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). Summary. Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to the increased indexing speed and accuracy. PMID:12386114

  2. Elite triathletes in 'Ironman Hawaii' get older but faster.

    PubMed

    Gallmann, Dalia; Knechtle, Beat; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2014-02-01

    The age of peak performance has been well investigated for elite athletes in endurance events such as marathon running, but not for ultra-endurance (>6 h) events such as an Ironman triathlon, covering 3.8 km swimming, 180 km cycling and 42 km running. The aim of this study was to analyze the changes in the age and performance of the annual top ten women and men at the Ironman World Championship, the 'Ironman Hawaii', from 1983 to 2012. Age and performance of the annual top ten women and men in overall race time and in each split discipline were analyzed. The age of the annual top ten finishers increased over time from 26 ± 5 to 35 ± 5 years (r(2) = 0.35, P < 0.01) for women and from 27 ± 2 to 34 ± 3 years (r(2) = 0.28, P < 0.01) for men. Overall race time of the annual top ten finishers decreased across years from 671 ± 16 to 566 ± 8 min (r(2) = 0.44, P < 0.01) for women and from 583 ± 24 to 509 ± 6 min (r(2) = 0.41, P < 0.01) for men. To conclude, the age of the annual top ten female and male triathletes in the 'Ironman Hawaii' increased over the last three decades while their performances improved. These findings suggest that the maturity of elite long-distance triathletes has changed during this period and raise the question of the upper limits of the age of peak performance in elite ultra-endurance performance.

  3. Parallel Wavefront Analysis for a 4D Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
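
    The capture/analyze split described above maps naturally onto a worker-pool pattern. A minimal single-machine sketch in Python (function names are illustrative stand-ins, not the 4Sight API; the real system farms frames out over a network):

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def process_frame(frame):
        # stand-in for the per-image phase-processing computation
        return sum(frame) / len(frame)

    def analyze_all(frames, workers=8):
        # farm each captured interferogram out to a separate process,
        # then collate the processed results into one measurement
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(process_frame, frames))
        return sum(results) / len(results)

    if __name__ == "__main__":
        frames = [[i, i + 1, i + 2] for i in range(100)]   # ~100 captures
        print(analyze_all(frames))
    ```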

  4. Low fitness, low body mass and prior injury predict injury risk during military recruit training: a prospective cohort study in the British Army

    PubMed Central

    Robinson, Mark; Siddall, Andrew; Bilzon, James; Thompson, Dylan; Greeves, Julie; Izard, Rachel; Stokes, Keith

    2016-01-01

    Background Injuries sustained by military recruits during initial training impede training progression and military readiness while increasing financial costs. This study investigated training-related injuries and injury risk factors among British Army infantry recruits. Methods Recruits starting infantry training at the British Army Infantry Training Centre between September 2008 and March 2010 were eligible to take part. Information regarding lifestyle behaviours and injury history was collected using the Military Pre-training Questionnaire. Sociodemographic, anthropometric, physical fitness and injury (lower limb and lower back) data were obtained from Army databases. Univariable and multivariable Cox regression models were used to explore the association between time to first training injury and potential risk factors. Results 58% (95% CI 55% to 60%) of 1810 recruits sustained at least 1 injury during training. Overuse injuries were more common than traumatic injuries (65% and 35%, respectively). The lower leg accounted for 81% of all injuries, and non-specific soft tissue damage was the leading diagnosis (55% of all injuries). Injuries resulted in 122 (118 to 126) training days lost per 1000 person-days. Slower 2.4 km run time, low body mass, past injury and shin pain were independently associated with higher risk of any injury. Conclusions There was a high incidence of overuse injuries in British Army recruits undertaking infantry training. Recruits with lower pretraining fitness levels, low body mass and past injuries were at higher risk. Faster 2.4 km run time performance and minimal body mass standards should be considered for physical entry criteria. PMID:27900170

  5. Central gene expression changes associated with enhanced neuroendocrine and autonomic response habituation to repeated noise stress after voluntary wheel running in rats

    PubMed Central

    Sasse, Sarah K.; Nyhuis, Tara J.; Masini, Cher V.; Day, Heidi E. W.; Campeau, Serge

    2013-01-01

    Accumulating evidence indicates that regular physical exercise benefits health in part by counteracting some of the negative physiological impacts of stress. While some studies identified reductions in some measures of acute stress responses with prior exercise, limited data were available concerning effects on cardiovascular function, and reported effects on hypothalamic-pituitary-adrenocortical (HPA) axis responses were largely inconsistent. Given that exposure to repeated or prolonged stress is strongly implicated in the precipitation and exacerbation of illness, we proposed the novel hypothesis that physical exercise might facilitate adaptation to repeated stress, and subsequently demonstrated significant enhancement of both HPA axis (glucocorticoid) and cardiovascular (tachycardia) response habituation to repeated noise stress in rats with long-term access to running wheels compared to sedentary controls. Stress habituation has been attributed to modifications of brain circuits, but the specific sites of adaptation and the molecular changes driving its expression remain unclear. Here, in situ hybridization histochemistry was used to examine regulation of select stress-associated signaling systems in brain regions representing likely candidates to underlie exercise-enhanced stress habituation. Analyzed brains were collected from active (6 weeks of wheel running) and sedentary rats following control, acute, or repeated noise exposures that induced a significantly faster rate of glucocorticoid response habituation in active animals but preserved acute noise responsiveness. Nearly identical experimental manipulations also induce a faster rate of cardiovascular response habituation in exercised, repeatedly stressed rats. The observed regulation of the corticotropin-releasing factor and brain-derived neurotrophic factor systems across several brain regions suggests widespread effects of voluntary exercise on central functions and related adaptations to stress across multiple response modalities. PMID:24324441

  6. The influence of training and mental skills preparation on injury incidence and performance in marathon runners.

    PubMed

    Hamstra-Wright, Karrie L; Coumbe-Lilley, John E; Kim, Hajwa; McFarland, Jose A; Huxel Bliven, Kellie C

    2013-10-01

    There has been a considerable increase in the number of participants running marathons over the past several years. The 26.2-mile race requires physical and mental stamina to complete successfully. However, studies have not investigated how running and mental skills preparation influence injury and performance. The purpose of our study was to describe the training and mental skills preparation of a typical group of runners as they began a marathon training program, assess the influence of training and mental skills preparation on injury incidence, and examine how training and mental skills preparation influence marathon performance. Healthy adults (N = 1,957) participating in an 18-week training program for a fall 2011 marathon were recruited for the study. One hundred twenty-five runners enrolled and received 4 surveys: pretraining, 6 weeks, 12 weeks, posttraining. The pretraining survey asked training and mental skills preparation questions. The 6- and 12-week surveys asked about injury incidence. The posttraining survey asked about injury incidence and marathon performance. Tempo runs during training preparation had a significant positive relationship to injury incidence in the 6-week survey (ρ[93] = 0.26, p = 0.01). The runners who reported incorporating tempo and interval runs, running more miles per week, and running more days per week in their training preparation ran significantly faster than did those reporting fewer tempo and interval runs, miles per week, and days per week (p ≤ 0.05). Mental skills preparation did not influence injury incidence or marathon performance. To prevent injury and maximize performance during marathon training, it is important that coaches and runners ensure a solid foundation of running fitness and experience, followed by gradually building volume and then strategically incorporating runs of various speeds and distances.

  7. Joint power and kinematics coordination in load carriage running: Implications for performance and injury.

    PubMed

    Liew, Bernard X W; Morris, Susan; Netto, Kevin

    2016-06-01

    Investigating the impact of incremental load magnitude on running joint power and kinematics is important for understanding the energy cost burden and potential injury-causative mechanisms associated with load carriage. It was hypothesized that incremental load magnitude would result in phase-specific, joint power and kinematic changes within the stance phase of running, and that these relationships would vary at different running velocities. Thirty-one participants performed running while carrying three load magnitudes (0%, 10%, 20% body weight), at three velocities (3, 4, 5m/s). Lower limb trajectories and ground reaction forces were captured, and global optimization was used to derive the variables. The relationships between load magnitude and joint power and angle vectors, at each running velocity, were analyzed using Statistical Parametric Mapping Canonical Correlation Analysis. Incremental load magnitude was positively correlated to joint power in the second half of stance. Increasing load magnitude was also positively correlated with alterations in three dimensional ankle angles during mid-stance (4.0 and 5.0m/s), knee angles at mid-stance (at 5.0m/s), and hip angles during toe-off (at all velocities). Post hoc analyses indicated that at faster running velocities (4.0 and 5.0m/s), increasing load magnitude appeared to alter power contribution in a distal-to-proximal (ankle→hip) joint sequence from mid-stance to toe-off. In addition, kinematic changes due to increasing load influenced both sagittal and non-sagittal plane lower limb joint angles. This study provides a list of plausible factors that may influence running energy cost and injury risk during load carriage running.

  8. Maximal metabolic rates during voluntary exercise, forced exercise, and cold exposure in house mice selectively bred for high wheel-running.

    PubMed

    Rezende, Enrico L; Chappell, Mark A; Gomes, Fernando R; Malisch, Jessica L; Garland, Theodore

    2005-06-01

    Selective breeding for high wheel-running activity has generated four lines of laboratory house mice (S lines) that run about 170% more than their control counterparts (C lines) on a daily basis, mostly because they run faster. We tested whether maximum aerobic metabolic rates (V(O2max)) have evolved in concert with wheel-running, using 48 females from generation 35. Voluntary activity and metabolic rates were measured on days 5+6 of wheel access (mimicking conditions during selection), using wheels enclosed in metabolic chambers. Following this, V(O2max) was measured twice on a motorized treadmill and twice during cold-exposure in a heliox atmosphere (HeO2). Almost all measurements, except heliox V(O2max), were significantly repeatable. After accounting for differences in body mass (S < C) and variation in age at testing, S and C did not differ in V(O2max) during forced exercise or in heliox, nor in maximal running speeds on the treadmill. However, running speeds and V(O2max) during voluntary exercise were significantly higher in S lines. Nevertheless, S mice never voluntarily achieved the V(O2max) elicited during their forced treadmill trials, suggesting that aerobic capacity per se is not limiting the evolution of even higher wheel-running speeds in these lines. Our results support the hypothesis that S mice have genetically higher motivation for wheel-running and they demonstrate that behavior can sometimes evolve independently of performance capacities. We also discuss the possible importance of domestication as a confounding factor to extrapolate results from this animal model to natural populations.

  9. Development of the Rice Convection Model as a Space Weather Tool

    DTIC Science & Technology

    2015-05-31

    …coupled to the ionosphere that is suitable both for scientific studies and as a prediction tool. We are able to run the model faster than "real [time]"… …of work by finding ways to fund a more systematic effort in making the RCM a space weather prediction tool for magnetospheric and ionospheric studies… [truncated DTIC record; subject terms: convection electric field, total electron content (TEC), ionospheric convection, plasmasphere]

  10. Transfers and Enhancements of the Teleconferencing System and Support of the Special Operations Planning Aids

    DTIC Science & Technology

    1984-10-31

    …five colors, page forward, page back, erase, clear the page, store previously annotated material, and later retrieve it. From this developed a four…system to secure sites. These enhancements are discussed below. … 2.1 Enhancements to the… and large cache memory of the Winchester drive allows the SGWS software to run much faster when doing file access or direct memory access (DMA) than…

  11. JPRS Report, China, Red Flag, Number 10, 16 May 1988

    DTIC Science & Technology

    1988-07-18

    A Basic Train of Thought in Revitalizing the Machine-Building Industry [Zou Jiahua] … Properly Run 'Inside-Factory Banks,' Improve Enterprise… …while making a study of industry and commerce, we cannot make a study of industry or commerce by itself. Instead of partially looking at a question, we… …done too much. With good intentions, some comrades expect our economy, industry in particular, to develop relatively faster. However, they have…

  12. Robust and Rapid Air-Borne Odor Tracking without Casting

    PubMed Central

    Bhattacharyya, Urvashi

    2015-01-01

    Abstract Casting behavior (zigzagging across an odor stream) is common in air/liquid-borne odor tracking in open fields; however, terrestrial odor localization often involves path selection in a familiar environment. To study this, we trained rats to run toward an odor source in a multi-choice olfactory arena with near-laminar airflow. We find that rather than casting, rats run directly toward an odor port, and if this is incorrect, they serially sample other sources. This behavior is consistent and accurate in the presence of perturbations, such as novel odors, background odor, unilateral nostril stitching, and turbulence. We developed a model that predicts that this run-and-scan tracking of air-borne odors is faster than casting, provided there are a small number of targets at known locations. Thus, the combination of best-guess target selection with fallback serial sampling provides a rapid and robust strategy for finding odor sources in familiar surroundings. PMID:26665165

  13. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

    The Fourier-Bessel Particle-In-Cell code is scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition allows it to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) and those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.

  14. Cost minimisation analysis: kilovoltage imaging with automated repositioning versus electronic portal imaging in image-guided radiotherapy for prostate cancer.

    PubMed

    Gill, S; Younie, S; Rolfo, A; Thomas, J; Siva, S; Fox, C; Kron, T; Phillips, D; Tai, K H; Foroudi, F

    2012-10-01

    To compare the treatment time and cost of prostate cancer fiducial marker image-guided radiotherapy (IGRT) using orthogonal kilovoltage imaging (KVI) and automated couch shifts and orthogonal electronic portal imaging (EPI) and manual couch shifts. IGRT treatment delivery times were recorded automatically on either unit. Costing was calculated from real costs derived from the implementation of a new radiotherapy centre. To derive cost per minute for EPI and KVI units the total annual setting up and running costs were divided by the total annual working time. The cost per IGRT fraction was calculated by multiplying the cost per minute by the duration of treatment. A sensitivity analysis was conducted to test the robustness of our analysis. Treatment times without couch shift were compared. Time data were analysed for 8648 fractions, 6057 from KVI treatment and 2591 from EPI treatment from a total of 294 patients. The median time for KVI treatment was 6.0 min (interquartile range 5.1-7.4 min) and for EPI treatment it was 10.0 min (interquartile range 8.3-11.8 min) (P value < 0.0001). The cost per fraction for KVI was A$258.79 and for EPI was A$345.50. The cost saving per fraction for KVI varied between A$66.09 and A$101.64 by sensitivity analysis. In patients where no couch shift was made, the median treatment delivery time for EPI was 8.8 min and for KVI was 5.1 min. Treatment time is less on KVI units compared with EPI units. This is probably due to automation of couch shift and faster evaluation of imaging on KVI units. Annual running costs greatly outweigh initial setting up costs and therefore the cost per fraction was less with KVI, despite higher initial costs. The selection of appropriate IGRT equipment can make IGRT practical within radiotherapy departments.

  15. Effects of Turbulence Model and Numerical Time Steps on Von Karman Flow Behavior and Drag Accuracy of Circular Cylinder

    NASA Astrophysics Data System (ADS)

    Amalia, E.; Moelyadi, M. A.; Ihsan, M.

    2018-04-01

    The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. This phenomenon is captured well only with an appropriate turbulence model. In this study, several turbulence models available in the software ANSYS Fluent 16.0 were tested for their ability to simulate the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, the von Karman vortex street phenomenon was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the smoother the resulting drag coefficient curves. A smaller time step size also gave faster computation times.

  16. Multi-Pivot Quicksort: an Experiment with Single, Dual, Triple, Quad, and Penta-Pivot Quicksort Algorithms in Python

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Zamzami, E. M.; Rachmawati, D.

    2017-03-01

    Dual-pivot quicksort, which was proposed by Yaroslavsky, has been experimentally shown to be more efficient than the classical single-pivot quicksort under the Java Virtual Machine [6]. Moreover, Kushagra, López-Ortiz, and Munro [4] have shown that triple-pivot quicksort runs 7-8% faster than dual-pivot quicksort in C, mutatis mutandis. In this research, we implement and experiment with single, dual, triple, quad, and penta-pivot quicksort algorithms in Python. Our experimental results are as follows. Firstly, the quicksort with a single pivot is the slowest among the five variants. Secondly, at least up to five (penta) pivots, the more pivots used in a quicksort algorithm, the faster its performance becomes. Thirdly, the speed gain from adding more pivots tends to diminish gradually.
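
    For reference, a minimal Python sketch of the Yaroslavsky-style dual-pivot partitioning scheme (my illustration; the paper's benchmarked implementations are not reproduced here):

    ```python
    # Dual-pivot quicksort: partition into < p, [p, q], and > q regions.
    def dual_pivot_quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        if a[lo] > a[hi]:
            a[lo], a[hi] = a[hi], a[lo]
        p, q = a[lo], a[hi]              # the two pivots, p <= q
        lt, gt, i = lo + 1, hi - 1, lo + 1
        while i <= gt:
            if a[i] < p:                 # element belongs in the left region
                a[i], a[lt] = a[lt], a[i]
                lt += 1
            elif a[i] > q:               # element belongs in the right region
                while a[gt] > q and i < gt:
                    gt -= 1
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
                if a[i] < p:
                    a[i], a[lt] = a[lt], a[i]
                    lt += 1
            i += 1
        lt -= 1
        gt += 1
        a[lo], a[lt] = a[lt], a[lo]      # move the pivots into final positions
        a[hi], a[gt] = a[gt], a[hi]
        dual_pivot_quicksort(a, lo, lt - 1)
        dual_pivot_quicksort(a, lt + 1, gt - 1)
        dual_pivot_quicksort(a, gt + 1, hi)

    xs = [5, 3, 8, 1, 9, 2]
    dual_pivot_quicksort(xs)
    print(xs)   # [1, 2, 3, 5, 8, 9]
    ```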

  17. Effects of air temperature and velocity on the drying kinetics and product particle size of starch from arrowroot (Maranta arundinacae)

    NASA Astrophysics Data System (ADS)

    Caparanga, Alvin R.; Reyes, Rachael Anne L.; Rivas, Reiner L.; De Vera, Flordeliza C.; Retnasamy, Vithyacharan; Aris, Hasnizah

    2017-11-01

    This study utilized a 3^k factorial design with k = 2 varying factors, namely temperature and air velocity. The effects of temperature and air velocity on the drying rate curves and on the average particle diameter of the arrowroot starch were investigated. Extracted arrowroot starch samples were dried under the designed parameters until constant weight was obtained. The initial moisture content of the arrowroot starch was 49.4%. Higher temperatures corresponded to higher drying rates and faster drying times, while air velocity had approximately negligible effect. Drying rate is a function of temperature and time. A constant rate period was not observed in the drying of arrowroot starch. The drying curves were fitted against five mathematical models: Lewis, Page, Henderson and Pabis, Logarithmic, and Midilli. The Midilli model was the best fit for the experimental data, since it yielded the highest R2 and the lowest RMSE values for all runs. Scanning electron microscopy (SEM) was used for qualitative analysis and for determination of the average particle diameter of the starch granules. The starch granule average particle diameters ranged from 12.06 to 24.60 μm. ANOVA showed that the particle diameters varied significantly between runs, and the Taguchi design showed that high temperatures yield a lower average particle diameter, while high air velocities yield a higher average particle diameter.
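
    The Midilli model mentioned above has the form MR = a·exp(-k·t^n) + b·t. A small sketch of fitting it with SciPy on synthetic data (the moisture-ratio values below are simulated for illustration, not the paper's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def midilli(t, a, k, n, b):
        # Midilli-type thin-layer drying model for moisture ratio MR(t)
        return a * np.exp(-k * t**n) + b * t

    # synthetic drying curve (minutes) with small measurement noise
    t = np.linspace(1, 300, 30)
    rng = np.random.default_rng(1)
    mr = midilli(t, 1.0, 0.02, 1.1, -1e-5) + rng.normal(0, 0.005, t.size)

    params, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.01, 1.0, 0.0))
    pred = midilli(t, *params)
    rmse = np.sqrt(np.mean((mr - pred) ** 2))   # goodness-of-fit criterion
    print(params, rmse)
    ```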

  18. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  19. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective: Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods: Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results: Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions: We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
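    The graph-based linkage step described above can be sketched compactly: collapse exact duplicates first, then connect records whose edit distance falls below a threshold and read off the connected components. The sketch below uses a hash map in place of radix sort and compares all pairs for brevity (the paper's algorithms avoid exhaustive pairwise comparison); it illustrates the idea rather than the authors' implementation:

        from itertools import combinations

        def edit_distance(a, b):
            """Row-by-row dynamic-programming Levenshtein distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                  # deletion
                                   cur[j - 1] + 1,               # insertion
                                   prev[j - 1] + (ca != cb)))    # substitution
                prev = cur
            return prev[-1]

        def link_records(records, max_dist=1):
            # Collapse exact duplicates first (the paper radix-sorts on key
            # attributes; a dict achieves the same effect in this sketch).
            uniq = list(dict.fromkeys(records))
            parent = list(range(len(uniq)))
            def find(x):                         # union-find with path halving
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            # Link every pair of similar records (all-pairs only for brevity).
            for i, j in combinations(range(len(uniq)), 2):
                if edit_distance(uniq[i], uniq[j]) <= max_dist:
                    parent[find(i)] = find(j)
            # Connected components are the linked clusters.
            clusters = {}
            for i, rec in enumerate(uniq):
                clusters.setdefault(find(i), []).append(rec)
            return list(clusters.values())

        print(link_records(["jon smith", "john smith", "jane doe", "jon smith"]))
        # -> [['jon smith', 'john smith'], ['jane doe']]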

  20. Girls in the boat: Sex differences in rowing performance and participation.

    PubMed

    Keenan, Kevin G; Senefeld, Jonathon W; Hunter, Sandra K

    2018-01-01

    Men outperform women in many athletic endeavors due to physiological and anatomical differences (e.g. larger and faster muscle); however, the observed sex differences in elite athletic performance are typically larger than expected, and may reflect sex-related differences in opportunity or incentives. As collegiate rowing in the United States has been largely incentivized for women over the last 20 years, but not men, the purpose of this study was to examine sex differences in elite rowing performance over that timeframe. Finishing times from grand finale races for collegiate championship on-water performances (n = 480) and junior indoor performances (n = 1,280) were compared between men and women across 20 years (1997-2016), weight classes (heavy vs. lightweight) and finishing place. The numbers of participating men and women rowers were also quantified across years. Men were faster than women across all finishing places, weight classes and years of competition, and performance declined across finishing place for both men and women (P<0.001). Interestingly, the reduction in performance time across finishing place was greater (P<0.001) for collegiate men compared to women in the heavyweight division. This result is the opposite of that in other sports (e.g. running and swimming), and of lightweight rowing in this study, which provides women fewer incentives than heavyweight rowing does. Correspondingly, participation in collegiate rowing has increased by ~113 women per year (P<0.001), with no change (P = 0.899) for collegiate men. These results indicate that increased participation and incentives within collegiate rowing for women vs. men contribute to sex differences in athletic performance.

  1. Girls in the boat: Sex differences in rowing performance and participation

    PubMed Central

    Senefeld, Jonathon W.; Hunter, Sandra K.

    2018-01-01

    Men outperform women in many athletic endeavors due to physiological and anatomical differences (e.g. larger and faster muscle); however, the observed sex differences in elite athletic performance are typically larger than expected, and may reflect sex-related differences in opportunity or incentives. As collegiate rowing in the United States has been largely incentivized for women over the last 20 years, but not men, the purpose of this study was to examine sex differences in elite rowing performance over that timeframe. Finishing times from grand finale races for collegiate championship on-water performances (n = 480) and junior indoor performances (n = 1,280) were compared between men and women across 20 years (1997–2016), weight classes (heavy vs. lightweight) and finishing place. The numbers of participating men and women rowers were also quantified across years. Men were faster than women across all finishing places, weight classes and years of competition, and performance declined across finishing place for both men and women (P<0.001). Interestingly, the reduction in performance time across finishing place was greater (P<0.001) for collegiate men compared to women in the heavyweight division. This result is the opposite of that in other sports (e.g. running and swimming), and of lightweight rowing in this study, which provides women fewer incentives than heavyweight rowing does. Correspondingly, participation in collegiate rowing has increased by ~113 women per year (P<0.001), with no change (P = 0.899) for collegiate men. These results indicate that increased participation and incentives within collegiate rowing for women vs. men contribute to sex differences in athletic performance. PMID:29352279

  2. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    PubMed Central

    Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin

    2018-01-01

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
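    HipMCL itself is a distributed MPI/OpenMP implementation, but the underlying MCL iteration is short enough to sketch on a dense matrix. The toy NumPy version below alternates expansion (matrix powering) with inflation (element-wise powering plus column normalization), which is the core loop that HipMCL parallelizes; it is only a conceptual illustration, not scalable code:

        import numpy as np

        def mcl(adjacency, expansion=2, inflation=2.0, iters=50, tol=1e-6):
            """Minimal dense Markov Clustering on a symmetric adjacency matrix."""
            m = adjacency + np.eye(len(adjacency))        # add self-loops
            m = m / m.sum(axis=0)                         # make column-stochastic
            for _ in range(iters):
                prev = m
                m = np.linalg.matrix_power(m, expansion)  # expansion: spread flow
                m = m ** inflation                        # inflation: favor strong flow
                m = m / m.sum(axis=0)
                if np.abs(m - prev).max() < tol:
                    break
            # Each column attaches to the row ("attractor") holding most of its mass.
            clusters = {}
            for col in range(m.shape[1]):
                clusters.setdefault(int(m[:, col].argmax()), []).append(col)
            return list(clusters.values())

        # Two triangles joined by one edge should split into two clusters.
        A = np.array([[0,1,1,0,0,0],
                      [1,0,1,0,0,0],
                      [1,1,0,1,0,0],
                      [0,0,1,0,1,1],
                      [0,0,0,1,0,1],
                      [0,0,0,1,1,0]], dtype=float)
        print(mcl(A))   # -> [[0, 1, 2], [3, 4, 5]]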

  3. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE PAGES

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...

    2018-01-05

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  4. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  5. BEHAVIOR OF MERCURY DURING DWPF CHEMICAL PROCESS CELL PROCESSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamecnik, J.; Koopman, D.

    2012-04-09

    The Defense Waste Processing Facility has experienced significant issues with the stripping and recovery of mercury in the Chemical Processing Cell (CPC). The stripping rate has been inconsistent, often resulting in extended processing times to remove mercury to the required endpoint concentration. The recovery of mercury in the Mercury Water Wash Tank has never been high, and has decreased significantly since the Mercury Water Wash Tank was replaced after the seventh batch of Sludge Batch 5. Since this time, essentially no recovery of mercury has been seen. Pertinent literature was reviewed, previous lab-scale data on mercury stripping and recovery was examined, and new lab-scale CPC Sludge Receipt and Adjustment Tank (SRAT) runs were conducted. For previous lab-scale data, many of the runs with sufficient mercury recovery data were examined to determine what factors affect the stripping and recovery of mercury and to improve closure of the mercury material balance. Ten new lab-scale SRAT runs (HG runs) were performed to examine the effects of acid stoichiometry, sludge solids concentration, antifoam concentration, form of mercury added to simulant, presence of a SRAT heel, operation of the SRAT condenser at higher than prototypic temperature, varying noble metals from none to very high concentrations, and higher agitation rate. Data from simulant runs from SB6, SB7a, glycolic/formic, and the HG tests showed that a significant amount of Hg metal was found on the vessel bottom at the end of tests. Material balance closure improved from 12-71% to 48-93% when this segregated Hg was considered. The amount of Hg segregated as elemental Hg on the vessel bottom was 4-77% of the amount added. The highest recovery of mercury in the offgas system generally correlated with the highest retention of Hg in the slurry. Low retention in the slurry (high segregation on the vessel bottom) resulted in low recovery in the offgas system. High agitation rates appear to result in lower retention of mercury in the slurry. Both recovery of mercury in the offgas system and removal (segregation + recovery) from the slurry correlate with slurry consistency. Higher slurry consistency results in better retention of Hg in the slurry (less segregation) and better recovery in the offgas system, but the relationships of recovery and retention with consistency are sludge dependent. Some correlation with slurry yield stress and acid stoichiometry was also found. Better retention of mercury in the slurry results in better recovery in the offgas system because the mercury in the slurry is stripped more easily than the segregated mercury at the bottom of the vessel. Although better retention gives better recovery, the time to reach a particular slurry mercury content (wt%) is longer than if the retention is poorer because the segregation is faster. The segregation of mercury is generally a faster process than stripping. The stripping factor (mass of water evaporated per mass of mercury stripped) of mercury at the start of boiling was found to be less than 1000 compared to the assumed design basis value of 750 (the theoretical factor is 250). However, within two hours, this value increased to at least 2000 lb water per lb Hg. For runs with higher mercury recovery in the offgas system, the stripping factor remained around 2000, but runs with low recovery had stripping factors of 4000 to 40,000. DWPF data shows similar trends with the stripping factor value increasing during boiling.
These high values correspond to high segregation and low retention of mercury in the sludge. The stripping factor for a pure Hg metal bead in water was found to be about 10,000 lb/lb. About 10-36% of the total Hg evaporated in a SRAT cycle was refluxed back to the SRAT during formic acid addition and boiling. Mercury is dissolved as a result of nitric acid formation from absorption of NOx. The actual solubility of dissolved mercury in the acidic condensate is about 100 times higher than the actual concentrations measured. Mercury metal present in the MWWT from previous batches could be dissolved by this acidic condensate. The test of the effect of higher SRAT condenser temperature on recovery of mercury in the MWWT and offgas system was inconclusive. The recovery at higher temperature was lower than several low temperature runs, but about the same as other runs. Factors other than temperature appear to affect the mercury recovery. The presence of chloride and iodide in simulants resulted in the formation of mercurous chloride and mercurous iodide, respectively, in the offgas system. Actual waste data shows that the chloride content is much less than the simulant concentrations. Future simulant tests should minimize the addition of chloride. Similarly, iodine addition should be eliminated unless actual waste analyses show it to be present; currently, total iodine is not measured on actual waste samples.

  6. Balanced Flow Metering and Conditioning: Technology for Fluid Systems

    NASA Technical Reports Server (NTRS)

    Kelley, Anthony R.

    2006-01-01

    Revolutionary new technology that creates balanced conditions across the face of a multi-hole orifice plate has been developed, patented and exclusively licensed for commercialization. This balanced flow technology simultaneously measures mass flow rate, volumetric flow rate, and fluid density with little or no straight pipe run requirements. Initially, the balanced plate was a drop-in replacement for a traditional orifice plate, but testing revealed substantially better performance than the orifice plate, such as 10 times better accuracy, 2 times faster (shorter distance) pressure recovery, 15 times less acoustic noise energy generation, and 2.5 times less permanent pressure loss. Testing at MSFC during 2004 revealed several configurations of the balanced flow meter that match the accuracy of Venturi meters while having only slightly more permanent pressure loss. However, the balanced meter only requires a 0.25 inch plate and has no upstream or downstream straight pipe requirements. As a fluid conditioning device, the fluid usually reaches fully developed flow within 1 pipe diameter of the balanced conditioning plate. This paper will describe the basic balanced flow metering technology, provide performance details generated by testing to date and provide implementation details along with calculations required for differing degrees of flow metering accuracy.

  7. Is the future the right time?

    PubMed

    Ouellet, Marc; Santiago, Julio; Israeli, Ziv; Gabay, Shai

    2010-01-01

    Spanish and English speakers tend to conceptualize time as running from left to right along a mental line. Previous research suggests that this representational strategy arises from the participants' exposure to a left-to-right writing system. However, direct evidence supporting this assertion suffers from several limitations and relies only on the visual modality. This study put the reading hypothesis to a direct test using an auditory task. Participants from two groups (Spanish and Hebrew), differing in the directionality of their orthographic system, had to discriminate the temporal reference (past or future) of verbs and adverbs presented auditorily to either the left or right ear by pressing a left or a right key. Spanish participants were faster responding to past words with the left hand and to future words with the right hand, whereas Hebrew participants showed the opposite pattern. Our results demonstrate that the left-right mapping of time is not restricted to the visual modality and that the direction of reading accounts for the preferred directionality of the mental time line. These results are discussed in the context of a possible mechanism underlying the effects of reading direction on highly abstract conceptual representations.

  8. Real-time, resource-constrained object classification on a micro-air vehicle

    NASA Astrophysics Data System (ADS)

    Buck, Louis; Ray, Laura

    2013-12-01

    A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128×192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
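    The χ² feature-mapping step can be illustrated with scikit-learn's AdditiveChi2Sampler, which approximates a χ² kernel so that a fast linear classifier can be applied to bag-of-visual-words histograms. The sketch below uses random arrays as stand-ins for real binary descriptors and a linear SVM in place of the CSL learner, so it shows the shape of the pipeline rather than the paper's system (the reported accuracy is meaningless on random data):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.kernel_approximation import AdditiveChi2Sampler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        # Stand-ins for binary descriptors (e.g. BRIEF/ORB output): one matrix of
        # local descriptors per image; real ones would come from a detector.
        def fake_descriptors(n):
            return rng.integers(0, 256, size=(n, 32)).astype(np.float64)

        train_desc = [fake_descriptors(200) for _ in range(20)]
        labels = np.array([i % 2 for i in range(20)])

        # 1) Build a visual vocabulary by clustering all descriptors.
        vocab = KMeans(n_clusters=16, n_init=4, random_state=0).fit(np.vstack(train_desc))

        # 2) Represent each image as a normalized histogram of visual words.
        def bow_histogram(desc):
            hist = np.bincount(vocab.predict(desc), minlength=16).astype(np.float64)
            return hist / hist.sum()

        X = np.array([bow_histogram(d) for d in train_desc])

        # 3) Chi-squared feature mapping, then a fast linear classifier.
        chi2 = AdditiveChi2Sampler(sample_steps=2).fit(X)
        clf = LinearSVC().fit(chi2.transform(X), labels)
        print(clf.score(chi2.transform(X), labels))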

  9. Measurement of Vibrations from the 8- by 6-Foot Supersonic Wind Tunnel

    NASA Image and Video Library

    1950-07-21

    Reverend Henry Birkenhauer and E.F. Carome measure ground vibrations on West 220th Street caused by the operation of the 8- by 6-Foot Supersonic Wind Tunnel at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory. The 8- by 6 was the laboratory’s first large supersonic wind tunnel. It was also the NACA’s most powerful supersonic tunnel, and the NACA’s first facility capable of running an engine at supersonic speeds. The 8- by 6 was originally an open-throat and non-return tunnel. This meant that the supersonic air flow was blown through the test section and out the other end into the atmosphere. Complaints from the local community led to the installation of a muffler at the tunnel exit and the eventual addition of a return leg. Reverend Birkenhauer, a seismologist, and Carome, an electrical technician, were brought in from John Carroll University to take vibration measurements during the 8- by 6 tunnel’s first run with a supersonic engine. They found that the majority of the vibrations came from the air and not the ground. The tunnel’s original muffler offered some relief during the facility checkout runs, but it proved inadequate during the operation of an engine in the test section. Tunnel operation was suspended until a new muffler was designed and installed. The NACA researchers, however, were pleased with the tunnel’s operation. They claimed it was the first time a jet engine was operated in an airflow faster than Mach 2.

  10. MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program

    NASA Astrophysics Data System (ADS)

    Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.

    2018-02-01

    We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
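    Because each XSTAR run is independent, the parallelization pattern is simple task farming. A minimal sketch with mpi4py follows; the parameter grid and command-line arguments are illustrative placeholders, not MPI_XSTAR's actual interface (which is written in C++):

        # Run with: mpiexec -n 8 python farm_runs.py
        from mpi4py import MPI
        import subprocess

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Hypothetical grid of photoionization parameters, one run each.
        grid = [{"column": c, "logxi": x}
                for c in ("1e20", "1e21", "1e22") for x in ("1.0", "2.0", "3.0")]

        # Static round-robin assignment: rank r takes jobs r, r+size, r+2*size, ...
        for job in grid[rank::size]:
            cmd = ["xstar", f"column={job['column']}", f"rlogxi={job['logxi']}"]
            subprocess.run(cmd, check=True)   # independent runs, no communication needed

        comm.Barrier()                        # wait for all ranks to finish
        if rank == 0:
            print(f"completed {len(grid)} runs across {size} ranks")

    With static assignment, ranks whose runs happen to be slow leave other ranks idle; that load imbalance is one plausible reason the reported resource efficiency drops as the processor count grows.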

  11. Bond Graph Model of Cerebral Circulation: Toward Clinically Feasible Systemic Blood Flow Simulations.

    PubMed

    Safaei, Soroush; Blanco, Pablo J; Müller, Lucas O; Hellevik, Leif R; Hunter, Peter J

    2018-01-01

    We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data.
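    The published model is distributed as CellML, but the flavor of a lumped-parameter compartment can be conveyed with the simplest possible analogue: a two-element Windkessel, where a compliance C stores volume and a resistance R drains it. The parameter values and inflow waveform below are illustrative only, not taken from the model:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two-element Windkessel: C * dP/dt = Q_in(t) - P / R.
        R, C = 1.0, 1.5          # peripheral resistance, arterial compliance
        HR = 1.0                 # one cardiac cycle per second

        def q_in(t):
            """Half-sine inflow during systole (first 30% of the cycle), else zero."""
            phase = t % (1.0 / HR)
            return 5.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

        def dP_dt(t, P):
            return [(q_in(t) - P[0] / R) / C]

        sol = solve_ivp(dP_dt, (0.0, 10.0), [1.0], max_step=0.005)
        print(f"pressure range late in the run: {sol.y[0][-200:].min():.2f}"
              f" .. {sol.y[0][-200:].max():.2f}")

    Such compartments are cheap to integrate, which is what makes a faster-than-real-time whole-circulation model feasible on a desktop computer.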

  12. Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device

    NASA Astrophysics Data System (ADS)

    Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben

    2015-02-01

    The objective of this paper is to obtain the spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD); the in vivo bilirubin absorption spectrum was regarded as the target spectrum. According to spectral construction theory, a simple genetic algorithm was first proposed in this study as the spectral matching algorithm. The optimal combination ratios of the LEDs were obtained, and the required number of LEDs was then calculated. The algorithm was also compared with existing spectral matching algorithms. The results show that this algorithm runs faster and with higher efficiency: the switching time consumed is 2.06 s, and the fitted spectrum is very similar to the target spectrum, with a 98.15% matching degree. Thus, a blue LED-based JPTD can replace the traditional blue fluorescent tube, and the spectral matching technology put forward here can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.
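    A genetic algorithm for this kind of spectral matching evolves a vector of nonnegative LED combination ratios toward maximum similarity with the target spectrum. In the sketch below both the LED spectra and the "bilirubin" target are synthetic Gaussians, the matching degree is taken as cosine similarity, and the GA settings are generic, so it illustrates the approach rather than reproducing the paper's algorithm:

        import numpy as np

        rng = np.random.default_rng(1)
        wl = np.linspace(400, 520, 61)                      # wavelength grid (nm)

        def gaussian(center, width=12.0):
            return np.exp(-0.5 * ((wl - center) / width) ** 2)

        target = gaussian(460, 25.0)                        # stand-in absorption peak
        leds = np.array([gaussian(c) for c in (430, 450, 470, 490)])

        def fitness(w):
            fit = w @ leds                                  # combined LED spectrum
            # cosine similarity as one plausible "matching degree"
            return float(fit @ target /
                         (np.linalg.norm(fit) * np.linalg.norm(target) + 1e-12))

        pop = rng.random((40, len(leds)))                   # initial random ratios
        for gen in range(200):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[::-1][:20]]    # truncation selection
            kids = []
            for _ in range(20):
                a, b = parents[rng.integers(20)], parents[rng.integers(20)]
                child = np.where(rng.random(len(leds)) < 0.5, a, b)  # uniform crossover
                child = np.clip(child + rng.normal(0, 0.05, len(leds)), 0, None)
                kids.append(child)                          # Gaussian mutation, kept >= 0
            pop = np.vstack([parents, kids])

        best = pop[np.argmax([fitness(w) for w in pop])]
        print("ratios:", np.round(best / best.sum(), 3), "match:", round(fitness(best), 4))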

  13. Efficient image compression algorithm for computer-animated images

    NASA Astrophysics Data System (ADS)

    Yfantis, Evangelos A.; Au, Matthew Y.; Miel, G.

    1992-10-01

    An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression algorithms and with Lempel-Ziv (LZ) coding. The Lempel-Ziv algorithm is available as a utility in the UNIX operating system, where it is known as UNIX compress. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and the intent is for it to be used on computer-animated graphics images. Comparisons made with the LZ algorithm indicate that decompression using our algorithm is faster than with the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
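    Plain run-length encoding, the basis the described algorithm extends, fits in a few lines: each maximal run of identical bytes is stored as a (count, value) pair, and decompression is a single pass of repetitions, which is why it is so fast once the data are in memory. A minimal byte-oriented sketch:

        def rle_encode(data: bytes) -> bytes:
            """Run-length encode as (count, value) byte pairs, count <= 255."""
            out = bytearray()
            i = 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i] and run < 255:
                    run += 1
                out += bytes((run, data[i]))
                i += run
            return bytes(out)

        def rle_decode(data: bytes) -> bytes:
            out = bytearray()
            for i in range(0, len(data), 2):
                out += data[i + 1 : i + 2] * data[i]   # repeat value `count` times
            return bytes(out)

        # Animated frames tend to contain long constant runs, the favorable case.
        flat = b"\x00" * 500 + b"\x7f" * 20
        packed = rle_encode(flat)
        assert rle_decode(packed) == flat
        print(f"{len(flat)} bytes -> {len(packed)} bytes")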

  14. Advanced fast 3D DSA model development and calibration for design technology co-optimization

    NASA Astrophysics Data System (ADS)

    Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing

    2017-04-01

    Direct Optimization (DO) of a 3D DSA model is a better approach to a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, in contrast to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns; their design is extremely important for obtaining robust model parameters when temporal data are absent. A model calibrated to a hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model predictions using programmed defects on chemo-epitaxy show encouraging results and agree qualitatively well with theoretical predictions from strong segregation theory.

  15. Money for nothing: How firms have financed R&D-projects since the Industrial Revolution

    PubMed Central

    Bakker, Gerben

    2013-01-01

    We investigate the long-run historical pattern of R&D-outlays by reviewing aggregate growth rates and historical cases of particular R&D projects, following the historical-institutional approach of Chandler (1962), North (1981) and Williamson (1985). We find that even the earliest R&D-projects used non-insignificant cash outlays and that until the 1970s aggregate R&D outlays grew far faster than GDP, despite five well-known challenges that implied that R&D could only be financed with cash, for which no perfect market existed: the presence of sunk costs, real uncertainty, long time lags, adverse selection, and moral hazard. We then review a wide variety of organisational forms and institutional instruments that firms historically have used to overcome these financing obstacles, and without which the enormous growth of R&D outlays since the nineteenth century would not have been possible. PMID:24910477

  16. fastSIM: a practical implementation of fast structured illumination microscopy.

    PubMed

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-01-16

    A significant improvement in acquisition speed of structured illumination microscopy (SIM) opens a new field of applications to this already well-established super-resolution method towards 3D scanning real-time imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS) which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free running mode in a modern sCMOS camera helps to achieve faster data acquisition.

  17. Money for nothing: How firms have financed R&D-projects since the Industrial Revolution.

    PubMed

    Bakker, Gerben

    2013-12-01

    We investigate the long-run historical pattern of R&D-outlays by reviewing aggregate growth rates and historical cases of particular R&D projects, following the historical-institutional approach of Chandler (1962), North (1981) and Williamson (1985). We find that even the earliest R&D-projects used non-insignificant cash outlays and that until the 1970s aggregate R&D outlays grew far faster than GDP, despite five well-known challenges that implied that R&D could only be financed with cash, for which no perfect market existed: the presence of sunk costs, real uncertainty, long time lags, adverse selection, and moral hazard. We then review a wide variety of organisational forms and institutional instruments that firms historically have used to overcome these financing obstacles, and without which the enormous growth of R&D outlays since the nineteenth century would not have been possible.

  18. Voice Based City Panic Button System

    NASA Astrophysics Data System (ADS)

    Febriansyah; Zainuddin, Zahir; Bachtiar Nappu, M.

    2018-03-01

    The voice-activated panic button application developed in this work aims to provide faster early notification of hazardous conditions in the community to the nearest police, using speech as the trigger; existing applications still rely on touch combinations on the screen and on orders coordinated from a control center, so early notification takes longer. The method used voice recognition to detect the user's voice and the haversine formula to compare distances and find the police nearest to the user. The application is also equipped with automatic SMS, which sends a notification to the victim's relatives, and is integrated with the Google Maps application (GMaps) to map the victim's location. The results show that voice registration on the application succeeds 100% of the time, incident detection using speech recognition while the application is running averages 94.67%, and automatic SMS to the victim's relatives succeeds 100% of the time.
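    The haversine formula gives the great-circle distance between two latitude/longitude points, which is how the nearest police station can be selected. A minimal sketch (the station coordinates below are made-up placeholders):

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two (lat, lon) points in kilometers."""
            R = 6371.0  # mean Earth radius, km
            p1, p2 = radians(lat1), radians(lat2)
            dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
            return 2 * R * asin(sqrt(a))

        def nearest_station(user, stations):
            """Return the police station closest to the user's coordinates."""
            return min(stations, key=lambda s: haversine_km(*user, s["lat"], s["lon"]))

        stations = [{"name": "Station A", "lat": -5.135, "lon": 119.423},
                    {"name": "Station B", "lat": -5.161, "lon": 119.436}]
        print(nearest_station((-5.147, 119.432), stations)["name"])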

  19. A new event detector designed for the Seismic Research Observatories

    USGS Publications Warehouse

    Murdock, James N.; Hutt, Charles R.

    1983-01-01

    A new short-period event detector has been implemented on the Seismic Research Observatories. For each signal detected, a printed output gives estimates of the time of onset of the signal, direction of the first break, quality of onset, period and maximum amplitude of the signal, and an estimate of the variability of the background noise. On the SRO system, the new algorithm runs ~2.5x faster than the former (power level) detector. This increase in speed is due to the design of the algorithm: all operations can be performed by simple shifts, additions, and comparisons (floating point operations are not required). Even though a narrow-band recursive filter is not used, the algorithm appears to detect events competitively with those algorithms that employ such filters. Tests at Albuquerque Seismological Laboratory on data supplied by Blandford suggest performance commensurate with the on-line detector of the Seismic Data Analysis Center, Alexandria, Virginia.
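    The SRO detector itself is not reproduced in the abstract, but its key constraint, using only shifts, additions, and comparisons, can be illustrated with a generic integer-only short-term/long-term average trigger in the same spirit (a standard STA/LTA-style scheme, not the published algorithm):

        def detect(samples, sta_shift=3, lta_shift=6, ratio_shift=1):
            """Integer-only trigger: short- and long-term rectified averages kept
            as scaled accumulators updated with shifts and adds; an event starts
            when STA > LTA * 2**ratio_shift."""
            acc_s = acc_l = 0
            onsets, triggered = [], False
            for i, x in enumerate(samples):
                amp = x if x >= 0 else -x              # rectify without multiplies
                acc_s += amp - (acc_s >> sta_shift)    # STA ~= acc_s >> sta_shift
                acc_l += amp - (acc_l >> lta_shift)    # LTA ~= acc_l >> lta_shift
                sta, lta = acc_s >> sta_shift, acc_l >> lta_shift
                if not triggered and lta > 0 and sta > (lta << ratio_shift):
                    onsets.append(i)
                    triggered = True
                elif triggered and sta <= lta:
                    triggered = False                  # event over; re-arm trigger
            return onsets

        quiet = [5, -4, 6, -5, 4, -6] * 40
        event = [60, -55, 70, -65, 50, -45] * 10
        print(detect(quiet + event + quiet))           # one onset near sample 240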

  20. Object-based media and stream-based computing

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.

    1998-03-01

    Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. This paper outlines examples of object-based media algorithms and applications developed by my group, and presents new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks even given heterogeneous computational resources.

  1. fastSIM: a practical implementation of fast structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Lu-Walther, Hui-Wen; Kielhorn, Martin; Förster, Ronny; Jost, Aurélie; Wicker, Kai; Heintzmann, Rainer

    2015-03-01

    A significant improvement in acquisition speed of structured illumination microscopy (SIM) opens a new field of applications to this already well-established super-resolution method towards 3D scanning real-time imaging of living cells. We demonstrate a method of increased acquisition speed on a two-beam SIM fluorescence microscope with a lateral resolution of ~100 nm at a maximum raw data acquisition rate of 162 frames per second (fps) with a region of interest of 16.5 × 16.5 µm², free of mechanically moving components. We use a programmable spatial light modulator (ferroelectric LCOS) which promises precise and rapid control of the excitation pattern in the sample plane. A passive Fourier filter and a segmented azimuthally patterned polarizer are used to perform structured illumination with maximum contrast. Furthermore, the free running mode in a modern sCMOS camera helps to achieve faster data acquisition.

  2. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  3. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng; Li, Guang Yao

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method. In this paper, an effective method to speed up FEM calculation using a modern graphics processing unit and a programmable color-rendering tool is put forward: element information is represented in accordance with the features of the GPU, all element calculations are converted into a rendering process, the internal-force calculations for all elements are carried out in this way, and the low level of parallelism previously achieved on a single computer is overcome. Studies showed that this method can greatly improve efficiency and shorten calculation time. Simulation results for the elasticity problem of a large number of shell elements in sheet metal showed that the GPU-parallel calculation was faster than the CPU-based one.

  4. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system, representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets were employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, it was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
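    For contrast with the QA hardware, classical simulated annealing on a binary beamlet problem fits in a few lines. The dose matrix and prescription below are random stand-ins (a real objective would include the dose-volume terms mentioned above), so this is a shape-of-the-method sketch only:

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy surrogate: choose binary beamlet intensities x so that D @ x
        # approaches a prescription d_want. D and d_want are random stand-ins.
        n_beamlets, n_voxels = 60, 200
        D = rng.random((n_voxels, n_beamlets))
        d_want = rng.uniform(10, 12, n_voxels)

        def objective(x):
            return float(np.sum((D @ x - d_want) ** 2))

        x = rng.integers(0, 2, n_beamlets)
        cur_f = objective(x)
        best_x, best_f = x.copy(), cur_f
        T = 100.0
        for step in range(20000):
            j = rng.integers(n_beamlets)
            x[j] ^= 1                              # flip one beamlet on/off
            f_new = objective(x)
            if f_new <= cur_f or rng.random() < np.exp(-(f_new - cur_f) / T):
                cur_f = f_new                      # accept (always if better,
                if cur_f < best_f:                 # sometimes if worse, per T)
                    best_x, best_f = x.copy(), cur_f
            else:
                x[j] ^= 1                          # reject: undo the flip
            T *= 0.9997                            # geometric cooling schedule
        print(round(best_f, 1))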

  5. Tail autotomy affects bipedalism but not sprint performance in a cursorial Mediterranean lizard

    NASA Astrophysics Data System (ADS)

    Savvides, Pantelis; Stavrou, Maria; Pafilis, Panayiotis; Sfenthourakis, Spyros

    2017-02-01

    Running is essential in all terrestrial animals mainly for finding food and mates and escaping from predators. Lizards employ running in all their everyday functions, among which defense stands out. Besides flight, tail autotomy is another very common antipredatory strategy within most lizard families. The impact of tail loss on sprint performance seems to be species dependent: in some lizard species tail shedding reduces sprint speed, in others it increases it, and in a few species speed is not affected at all. Here, we aimed to clarify the effect of tail autotomy on the sprint performance of a cursorial lizard with particular adaptations for running, such as bipedalism and spike-like protruding scales (fringes) on the toepads that allow high speed on sandy substrates. We hypothesized that individuals that ran bipedally, and had more and larger fringes, would achieve higher sprint performance. We also anticipated that tail shedding would affect sprint speed (though we were not able to define in what way because of the unpredictable effects that tail loss has on different species). According to our results, individuals that ran bipedally were faster; limb length and fringe size had limited effects on sprint performance whereas tail autotomy affected quadrupedal running only in females. Nonetheless, tail loss significantly affected bipedalism: the ability to run on the hindlimbs was completely lost in all adult individuals and in 72.3% of juveniles.

  6. Natural Whisker-Guided Behavior by Head-Fixed Mice in Tactile Virtual Reality

    PubMed Central

    Sofroniew, Nicholas J.; Cohen, Jeremy D.; Lee, Albert K.

    2014-01-01

    During many natural behaviors the relevant sensory stimuli and motor outputs are difficult to quantify. Furthermore, the high dimensionality of the space of possible stimuli and movements compounds the problem of experimental control. Head fixation facilitates stimulus control and movement tracking, and can be combined with techniques for recording and manipulating neural activity. However, head-fixed mouse behaviors are typically trained through extensive instrumental conditioning. Here we present a whisker-based, tactile virtual reality system for head-fixed mice running on a spherical treadmill. Head-fixed mice displayed natural movements, including running and rhythmic whisking at 16 Hz. Whisking was centered on a set point that changed in concert with running so that more protracted whisking was correlated with faster running. During turning, whiskers moved in an asymmetric manner, with more retracted whisker positions in the turn direction and protracted whisker movements on the other side. Under some conditions, whisker movements were phase-coupled to strides. We simulated a virtual reality tactile corridor, consisting of two moveable walls controlled in a closed-loop by running speed and direction. Mice used their whiskers to track the walls of the winding corridor without training. Whisker curvature changes, which cause forces in the sensory follicles at the base of the whiskers, were tightly coupled to distance from the walls. Our behavioral system allows for precise control of sensorimotor variables during natural tactile navigation. PMID:25031397

  7. Tail autotomy affects bipedalism but not sprint performance in a cursorial Mediterranean lizard.

    PubMed

    Savvides, Pantelis; Stavrou, Maria; Pafilis, Panayiotis; Sfenthourakis, Spyros

    2017-02-01

    Running is essential in all terrestrial animals mainly for finding food and mates and escaping from predators. Lizards employ running in all their everyday functions, among which defense stands out. Besides flight, tail autotomy is another very common antipredatory strategy within most lizard families. The impact of tail loss on sprint performance seems to be species dependent: in some lizard species tail shedding reduces sprint speed, in others it increases it, and in a few species speed is not affected at all. Here, we aimed to clarify the effect of tail autotomy on the sprint performance of a cursorial lizard with particular adaptations for running, such as bipedalism and spike-like protruding scales (fringes) on the toepads that allow high speed on sandy substrates. We hypothesized that individuals that ran bipedally, and had more and larger fringes, would achieve higher sprint performance. We also anticipated that tail shedding would affect sprint speed (though we were not able to define in what way because of the unpredictable effects that tail loss has on different species). According to our results, individuals that ran bipedally were faster; limb length and fringe size had limited effects on sprint performance whereas tail autotomy affected quadrupedal running only in females. Nonetheless, tail loss significantly affected bipedalism: the ability to run on the hindlimbs was completely lost in all adult individuals and in 72.3% of juveniles.

  8. Fatigue crack growth in SA508-CL2 steel in a high temperature, high purity water environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, T.L.; Heald, J.D.; Kiss, E.

    1974-10-01

    Fatigue crack growth tests were conducted with 1 in. plate specimens of SA508-CL 2 steel in room temperature air, 550°F air and in a 550°F, high purity, water environment. Zero-tension load controlled tests were run at cyclic frequencies as low as 0.037 CPM. Results show that growth rates in the simulated Boiling Water Reactor (BWR) water environment are faster than growth rates observed in 550°F air and these rates are faster than the room temperature rate. In the BWR water environment, lowering the cyclic frequency from 0.37 to 0.037 CPM caused only a slight increase in the fatigue crack growth rate. All growth rates measured in these tests were below the upper bound design curve presented in Section XI of the ASME Code.

  9. Effect of Acu-TENS on recovery heart rate after treadmill running exercise in subjects with normal health.

    PubMed

    Cheung, Leo Chin-Ting; Jones, Alice Yee-Men

    2007-06-01

    This study aims to investigate the effect of transcutaneous electrical nerve stimulation, applied at bilateral acupuncture points PC6 (Acu-TENS), on recovery heart rate (HR) in healthy subjects after treadmill running exercise. The study was a single-blinded, randomized controlled trial conducted in a laboratory with healthy male subjects (n=28). Each subject participated in three separate protocols in random order. PROTOCOL A: The subject followed the Bruce protocol and ran on a treadmill until their HR reached 70% of their maximum (220-age). At this 'target' HR, the subject adopted the supine position and Acu-TENS to bilateral PC6 was commenced. PROTOCOL B: Identical to protocol A except that Acu-TENS was applied in the supine position for 45min prior to, but not after exercise. PROTOCOL C: Identical to protocol A except that placebo Acu-TENS was applied. Heart rate was recorded before and at 30s intervals after exercise until it returned to the pre-exercise baseline. The time for HR to return to baseline was compared for each protocol. Acu-TENS applied to bilateral PC6 resulted in a faster return to pre-exercise HR compared to placebo. Time required for HR to return to pre-exercise level in protocols A-C was 5.5±3.0, 4.8±3.3, and 9.4±3.7 min, respectively (p<0.001). There was no statistical difference in HR recovery time between protocols A and B. Subjects expressed the lowest rate of perceived exertion (RPE) score at 70% maximum HR with protocol B. This study suggests that Acu-TENS applied to PC6 may facilitate HR recovery after high intensity treadmill exercise.

  10. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  11. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event. Otherwise one always has the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way the CPU power has been estimated in WLCG since 2007 is using the HEP-Spec06 benchmark (HS06) suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, and all WLCG experiments have moved to 64-bit applications and use different compilation flags from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script that is used by the DIRAC framework used by LHCb for evaluating at run time the performance of the worker nodes. This contribution reports on the findings of these comparisons: the main observation is that the scaling with HS06 is no longer fulfilled, while the fast benchmarks have better scaling but are less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond expectation from either benchmark, depending on external factors.
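    A fast benchmark of the kind run at job start can be approximated by timing a fixed arithmetic workload and inverting the wall time. The sketch below is a generic illustration in arbitrary units, not the actual DIRAC benchmark script and not HS06:

        import time

        def quick_power_estimate(loops=3_000_000, reps=3):
            """Crude single-core 'power' score: fixed workload / best wall time.
            Units are arbitrary; only relative comparisons between nodes matter."""
            best = float("inf")
            for _ in range(reps):
                t0 = time.perf_counter()
                x = 0.0
                for i in range(loops):
                    x += (i % 7) * 0.5 - (i % 3) * 0.25   # cheap mixed arithmetic
                best = min(best, time.perf_counter() - t0)
            return loops / best / 1e6                     # Mloops per second

        print(f"score: {quick_power_estimate():.1f} Mloops/s")

    Such a probe is quick enough to run in every pilot job, which is the trade-off the abstract describes: better per-slot scaling than a one-off HS06 rating, at the cost of precision.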

  12. Generalized conjugate-gradient methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1991-01-01

    A generalized conjugate-gradient method is used to solve the two-dimensional, compressible Navier-Stokes equations of fluid flow. The equations are discretized with an implicit, upwind finite-volume formulation. Preconditioning techniques are incorporated into the new solver to accelerate convergence of the overall iterative method. The superiority of the new solver is demonstrated by comparisons with a conventional line Gauss-Seidel relaxation solver. Computational test results for transonic flow (trailing edge flow in a transonic turbine cascade) and hypersonic flow (M = 6.0 shock-on-shock phenomena on a cylindrical leading edge) are presented. When applied to the transonic cascade case, the new solver is 4.4 times faster in terms of number of iterations and 3.1 times faster in terms of CPU time than the relaxation solver. For the hypersonic shock case, the new solver is 3.0 times faster in terms of number of iterations and 2.2 times faster in terms of CPU time than the relaxation solver.
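    The role of the preconditioner is easiest to see on the plain symmetric positive-definite variant of CG; the paper's solver handles nonsymmetric Navier-Stokes systems with generalized CG methods, but preconditioning enters the iteration in the same place. A Jacobi-preconditioned sketch on a 1-D Poisson test matrix:

        import numpy as np

        def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
            """Jacobi-preconditioned conjugate gradient for SPD systems."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv_diag * r              # apply M^{-1} ~= diag(A)^{-1}
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    return x, k + 1
                z = M_inv_diag * r          # precondition the new residual
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        # 1-D Poisson test matrix (SPD, tridiagonal).
        n = 200
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x, iters = pcg(A, b, 1.0 / np.diag(A))
        print(iters, np.linalg.norm(A @ x - b))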

  13. Morphological evolution of spiders predicted by pendulum mechanics.

    PubMed

    Moya-Laraño, Jordi; Vinković, Dejan; De Mas, Eva; Corcobado, Guadalupe; Moreno, Eulalia

    2008-03-26

    Animals have been hypothesized to benefit from pendulum mechanics during suspensory locomotion, in which the potential energy of gravity is converted into kinetic energy according to the energy-conservation principle. However, no convincing evidence has been found so far. Demonstrating that morphological evolution follows pendulum mechanics is important from a biomechanical point of view because during suspensory locomotion some morphological traits could be decoupled from gravity, thus allowing independent adaptive morphological evolution of these two traits when compared to animals that move standing on their legs; i.e., as inverted pendulums. If the evolution of body shape matches simple pendulum mechanics, animals that move suspending their bodies should evolve relatively longer legs which must confer high moving capabilities. We tested this hypothesis in spiders, a group of diverse terrestrial generalist predators in which suspensory locomotion has been lost and gained a few times independently during their evolutionary history. In spiders that hang upside-down from their webs, their legs have evolved disproportionately longer relative to their body sizes when compared to spiders that move standing on their legs. In addition, we show how disproportionately longer legs allow spiders to run faster during suspensory locomotion and how these same spiders run at a slower speed on the ground (i.e., as inverted pendulums). Finally, when suspensory spiders are induced to run on the ground, there is a clear trend in which larger suspensory spiders tend to run much more slowly than similar-size spiders that normally move as inverted pendulums (i.e., wandering spiders). Several lines of evidence support the hypothesis that spiders have evolved according to the predictions of pendulum mechanics. These findings have potentially important ecological and evolutionary implications since they could partially explain the occurrence of foraging plasticity and dispersal constraints as well as the evolution of sexual size dimorphism and sociality.

  14. Technology Insertion (TI)/Industrial Process Improvement (IPI) Task Order Number 1. Quick Fix Plan for WR-ALC, 7 RCC’s

    DTIC Science & Technology

    1989-09-25

    Orders and test specifications. Some mandatory replacement of high-failure items is directed by Technical Orders to extend MTBF. Precision bearing and... Experience is very high but natural attrition is reducing the numbers faster than training is furnishing younger mechanics. Surge conditions would be... model validation run output revealed that utilization of equipment is very low and manpower is high. Based on this analysis and the brainstorming

  15. Reactive Shear Layer Mixing and Growth Rate Effects on Afterburning Properties for Axisymetric Rocket Engine Plumes

    DTIC Science & Technology

    2006-09-01

    water, carbon monoxide and carbon dioxide. The ratio of specific heats is reduced as the number of atoms in the molecule increases and as the... The flow of the jet is faster than the surrounding air, and since gas turbine engines run fuel lean, the exhaust products have generally fully reacted... previous types by several characteristics. The core of the rocket exhaust flowfield is fuel rich, and unlike gas turbine engines, which burn fuel

  16. Feasibility of Using Littoral Combat Ships (LCS) for Humanitarian Assistance/Disaster Relief (HA/DR) Operations

    DTIC Science & Technology

    2012-09-01

    when travelling at sprint speed. To help overcome the shortcomings of the LCS in conducting HA/DR operations, the Irregular Warfare (IW) mission... high sprint speed, which allows the LCS to reach the disaster region faster than any other ship, especially if the IW mission package is adopted. The... high sprint speed in excess of 40 knots and a high sustained speed to enable it to run alongside a 30+ knot CSG or 20+ knot ESG. The high sprint

  17. Volumetric Real-Time Imaging Using a CMUT Ring Array

    PubMed Central

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods—flash, classic phased array (CPA), and synthetic phased array (SPA)—were used in the study. For SPA imaging, two techniques to improve the image quality—Hadamard coding and aperture weighting—were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming. PMID:22718870
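
    The time-of-flight ceiling quoted above follows from a simple round-trip calculation. A sketch, assuming one transmit event per element of the 64-element array and an illustrative 26 mm imaging depth (the exact firing scheme and depth are not restated in the abstract):

        SPEED_OF_SOUND = 1540.0  # m/s, nominal soft-tissue value

        def max_volume_rate(n_firings, depth_m, c=SPEED_OF_SOUND):
            # Each transmit event must wait for the round trip to the deepest point,
            # so acquiring one volume takes n_firings * (2 * depth / c) seconds.
            t_volume = n_firings * (2.0 * depth_m / c)
            return 1.0 / t_volume

        # Assumed parameters; they land close to the reported 463 volumes/s.
        print(round(max_volume_rate(n_firings=64, depth_m=0.026)))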


  19. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still a time-intensive task. In this paper we explore the potential of a new in-memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures for exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance analysis between HANA and MySQL compared the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new in-memory concepts, motivating further development of DNA analysis procedures in the future.
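
    As a stand-in for the exact-search stored procedures (the HANA and MySQL SQL itself is not reproduced in the abstract), a minimal Python version of exact read matching against a reference sequence could look like this:

        def exact_read_positions(reference: str, read: str):
            """Return all 0-based positions where `read` occurs exactly in `reference`."""
            positions, start = [], 0
            while True:
                hit = reference.find(read, start)
                if hit == -1:
                    return positions
                positions.append(hit)
                start = hit + 1  # advance by one to allow overlapping occurrences

        # Toy example; a real run would scan chromosome sequences from GRCh37.
        print(exact_read_positions("ACGTACGTAC", "ACGTAC"))  # -> [0, 4]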

  20. An algorithm for computing the gene tree probability under the multispecies coalescent and its application in the inference of population tree

    PubMed Central

    2016-01-01

    Motivation: A gene tree represents the evolutionary history of gene lineages that originate from multiple related populations. Under the multispecies coalescent model, lineages may coalesce outside the species (population) boundary. Given a species tree (with branch lengths), the gene tree probability is the probability of observing a specific gene tree topology under the multispecies coalescent model. There are two existing algorithms for computing the exact gene tree probability. The first, due to Degnan and Salter, enumerates all the so-called coalescent histories for the given species tree and gene tree topology; it runs in exponential time in the number of gene lineages in general. The second is the STELLS algorithm (2012), which is usually faster but also runs in exponential time in almost all cases. Results: In this article, we present a new algorithm, called CompactCH, for computing the exact gene tree probability. This new algorithm is based on the notion of compact coalescent histories: multiple coalescent histories are represented by a single compact coalescent history. The key advantage of our new algorithm is that it runs in polynomial time in the number of gene lineages if the number of populations is fixed to a constant. The new algorithm is more efficient than the STELLS algorithm both in theory and in practice when the number of populations is small and there are multiple gene lineages from each population. As an application, we show that CompactCH can be applied to the inference of the population tree (i.e. the population divergence history) from population haplotypes. Simulation results show that the CompactCH algorithm enables efficient and accurate inference of population trees with many more haplotypes than a previous approach. Availability: The CompactCH algorithm is implemented in the STELLS software package, which is available for download at http://www.engr.uconn.edu/ywu/STELLS.html. Contact: ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307621

  1. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
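
    A constraint checker of this kind is, at its core, a rule evaluator walking a time-ordered command list. The following miniature is hypothetical (the command names and flight rules are invented for illustration, not taken from the JPL system):

        # Hypothetical flight rules: an instrument must be powered on before use,
        # and two heater activations may not occur within 10 s of each other.
        def check_sequence(commands):
            violations, power_on, last_heater = [], False, None
            for t, cmd in commands:  # commands is a list of (time_s, command) pairs
                if cmd == "PWR_ON":
                    power_on = True
                elif cmd == "TAKE_IMAGE" and not power_on:
                    violations.append((t, "TAKE_IMAGE before PWR_ON"))
                elif cmd == "HEATER_ON":
                    if last_heater is not None and t - last_heater < 10:
                        violations.append((t, "HEATER_ON within 10 s of previous"))
                    last_heater = t
            return violations

        seq = [(0, "TAKE_IMAGE"), (5, "PWR_ON"), (6, "HEATER_ON"), (12, "HEATER_ON")]
        print(check_sequence(seq))  # two violations flagged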

  2. The aerodynamics of running socks: Reality or rhetoric?

    PubMed

    Ashford, Robert L; White, Peter; Indramohan, Vivek

    2011-12-01

    The primary objective of this study was to test the aerodynamic properties of a selection of running and general sports socks. Eleven pairs of socks were tested in a specially constructed rig inserted into a fully calibrated wind tunnel. Wind test speeds included 3, 4, 5, 6, 12 and 45 m/s. There was no significant difference between any of the socks tested in their aerodynamic properties. The drag coefficient calculated for each sock varied in proportion to the Reynolds number. No single sock was more aerodynamic than the others tested. There is no evidence that a sock that is "aerodynamically designed" will help an athlete go faster. This may be more product rhetoric than reality, and further work is justified if such claims are being made. Copyright © 2011 Elsevier Ltd. All rights reserved.
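
    The quantities behind such wind-tunnel comparisons are standard. A short sketch of the defining formulas, with nominal air properties and a hypothetical characteristic length (the paper's test geometry is not restated in the abstract):

        def reynolds_number(v, L, rho=1.225, mu=1.81e-5):
            # Re = rho * v * L / mu, using air density and dynamic viscosity at ~15 C
            return rho * v * L / mu

        def drag_coefficient(F_drag, v, area, rho=1.225):
            # Cd = 2F / (rho * v^2 * A), rearranged from F = 0.5 * rho * v^2 * Cd * A
            return 2.0 * F_drag / (rho * v**2 * area)

        # Illustrative: a 0.25 m characteristic length at the tested wind speeds.
        for v in (3, 4, 5, 6, 12, 45):
            print(v, f"Re = {reynolds_number(v, 0.25):.2e}")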

  3. An improved non-uniformity correction algorithm and its GPU parallel implementation

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which always leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint was proposed. We put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image were obtained separately by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter was utilized to combine these two parts to obtain the estimate of the spatial low-frequency component. Finally, this SLP component was fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. In addition, a GPU-based parallel implementation that runs 150 times faster than the CPU version is presented, showing that the proposed algorithm has great potential for real-time application.
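
    A minimal sketch of the SLP-THP idea (spatial low-pass, temporal high-pass) is shown below, with a plain Gaussian filter standing in for the paper's curvature-constrained estimator of the low-frequency component; parameters are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def slp_thp_correct(frames, sigma=9.0, alpha=0.05):
            """Estimate fixed-pattern noise as the temporally low-passed part of each
            frame's spatial high-frequency residual, then subtract it."""
            pattern = np.zeros(frames.shape[1:])
            corrected = []
            for f in frames.astype(float):
                slp = gaussian_filter(f, sigma)            # spatial low-pass component
                residual = f - slp                         # detail plus fixed-pattern noise
                pattern += alpha * (residual - pattern)    # slow temporal update
                corrected.append(f - pattern)
            return np.stack(corrected)

        frames = np.random.default_rng(0).normal(100.0, 5.0, (20, 64, 64))
        print(slp_thp_correct(frames).shape)  # (20, 64, 64)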

  4. Cross-Compiler for Modeling Space-Flight Systems

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed in a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  5. A Compact Synchronous Cellular Model of Nonlinear Calcium Dynamics: Simulation and FPGA Synthesis Results.

    PubMed

    Soleimani, Hamid; Drakakis, Emmanuel M

    2017-06-01

    Recent studies have demonstrated that calcium is a widespread intracellular ion that controls a wide range of temporal dynamics in the mammalian body. The simulation and validation of such studies using experimental data would benefit from a fast large scale simulation and modelling tool. This paper presents a compact and fully reconfigurable cellular calcium model capable of mimicking Hopf bifurcation phenomenon and various nonlinear responses of the biological calcium dynamics. The proposed cellular model is synthesized on a digital platform for a single unit and a network model. Hardware synthesis, physical implementation on FPGA, and theoretical analysis confirm that the proposed cellular model can mimic the biological calcium behaviors with considerably low hardware overhead. The approach has the potential to speed up large-scale simulations of slow intracellular dynamics by sharing more cellular units in real-time. To this end, various networks constructed by pipelining 10 k to 40 k cellular calcium units are compared with an equivalent simulation run on a standard PC workstation. Results show that the cellular hardware model is, on average, 83 times faster than the CPU version.

  6. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    PubMed

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than the O(n^2) of a naive comparison of transitions. Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
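
    The paper's backward-depth pre-partitioning is not reproduced here, but the refinement it accelerates is the classic partition-refinement loop; a simple Moore-style baseline looks like this:

        def minimize_dfa(states, alphabet, delta, accepting):
            """Moore-style partition refinement, an O(n^2) baseline; the paper's
            algorithm reaches O(n) by first partitioning on backward depth."""
            partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
            while True:
                block_of = {s: i for i, b in enumerate(partition) for s in b}
                # Two states stay together only if every symbol leads to the same block.
                sig = lambda s: tuple(block_of[delta[(s, a)]] for a in alphabet)
                refined = []
                for block in partition:
                    groups = {}
                    for s in block:
                        groups.setdefault(sig(s), set()).add(s)
                    refined.extend(groups.values())
                if len(refined) == len(partition):
                    return refined
                partition = refined

        # Two equivalent accepting states collapse into a single block:
        delta = {("q0", "a"): "q1", ("q0", "b"): "q2", ("q1", "a"): "q1",
                 ("q1", "b"): "q1", ("q2", "a"): "q2", ("q2", "b"): "q2"}
        print(minimize_dfa({"q0", "q1", "q2"}, "ab", delta, {"q1", "q2"}))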

  7. Production of primordial gravitational waves in a simple class of running vacuum cosmologies

    NASA Astrophysics Data System (ADS)

    Tamayo, D. A.; Lima, J. A. S.; Bessada, D. F. A.

    The problem of cosmological production of gravitational waves (GWs) is discussed in the framework of an expanding, spatially homogeneous and isotropic FRW-type universe with a time-evolving vacuum energy density. The GW equation is established, and its modified time-dependent part is solved analytically for different epochs in the case of a flat geometry. Unlike in standard ΛCDM cosmology (no interacting vacuum), we show that GWs are produced in the radiation era even in the context of general relativity. We also show that for all values of the free parameter, the high-frequency modes are damped out even faster than in the standard cosmology, both in the radiation and in the matter-vacuum dominated epochs. The formation of the stochastic background of gravitons and the remnant power spectrum generated at different cosmological eras are also explicitly evaluated. It is argued that measurements of the CMB polarization (B-modes) and their comparison with the rigid ΛCDM model plus the inflationary paradigm may become a crucial test for dynamical dark energy models in the near future.

  8. Performance of a parallel thermal-hydraulics code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fann, G.I.; Trent, D.S.

    The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production-quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel, diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative, pressure-based, mass-conserving algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite-volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST; BiCGStab also solves the coupled thermal energy equation. Scaling performance for three problem sizes (221,220 nodes; 358,120 nodes; and 701,220 nodes) is presented. These problems were run on two different parallel machines: IBM-SP and SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128-processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
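
    As a sketch of the solver pattern (not TEMPEST's code), here is preconditioned BiCGStab in SciPy on a stand-in 1-D pressure-like system; an incomplete-LU preconditioner is shown, whereas the diagonal preconditioning described above would use M = 1/diag(A) instead:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

        n = 1000
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # Poisson-like
        b = np.ones(n)

        ilu = spilu(A)                           # incomplete LU factorization
        M = LinearOperator((n, n), ilu.solve)    # preconditioner as a linear operator

        x, info = bicgstab(A, b, M=M)
        print("info:", info, "residual:", np.linalg.norm(A @ x - b))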

  9. Parallel family trees for transfer matrices in the Potts model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

    The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions about the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared-memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved using p = 4 processors, while for the cluster scenario it was in the range p ∈ [8, 10]. Because of the parallel capabilities of the algorithm, a large-scale execution of the parallel family trees strategy on a supercomputer could contribute to the study of wider strip lattices.
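
    The gap between the O(4^m) and O(3^m) configuration spaces is easy to see numerically; the few lines below compare Catalan-number growth with 3^m (an illustration only, not the paper's data structures):

        from math import comb

        def catalan(m):
            # C_m = (2m choose m) / (m + 1), which grows asymptotically like 4^m
            return comb(2 * m, m) // (m + 1)

        # The polynomial factor in C_m delays the crossover, but for wide strips
        # the 4^m growth dominates 3^m decisively.
        for m in (8, 16, 24, 32):
            print(m, catalan(m), 3 ** m, f"ratio = {catalan(m) / 3 ** m:.2f}")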

  10. The LSST Scheduler from design to construction

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Reuter, Michael A.

    2016-07-01

    The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding a very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS), that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and the internal conditions of the observatory. The design of the LSST Scheduler started early in the project supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the survey capabilities required. In order to build such a critical component, an agile development path in incremental releases is presented, integrated to the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.
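
    A cost-function-driven scheduler of this type reduces, in miniature, to scoring every candidate and taking the cheapest. The sketch below is hypothetical (invented fields and weights, nothing like the 200-parameter LSST cost function):

        def pick_next_target(candidates, weights):
            # Lower cost is better: penalize slew time and airmass, reward science value.
            def cost(t):
                return (weights["slew"] * t["slew_time_s"]
                        + weights["airmass"] * t["airmass"]
                        - weights["science"] * t["science_value"])
            return min(candidates, key=cost)

        candidates = [
            {"name": "field_a", "slew_time_s": 12.0, "airmass": 1.1, "science_value": 0.9},
            {"name": "field_b", "slew_time_s": 3.0, "airmass": 1.6, "science_value": 0.7},
        ]
        weights = {"slew": 0.05, "airmass": 1.0, "science": 2.0}
        print(pick_next_target(candidates, weights)["name"])  # -> field_a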

  11. The age-related performance decline in ultraendurance mountain biking.

    PubMed

    Haupt, Samuel; Knechtle, Beat; Knechtle, Patrizia; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2013-01-01

    The age-related changes in ultraendurance performance have previously been examined for running and triathlon, but not for mountain biking. The aims of this study were (i) to describe the performance trends and (ii) to analyze the age-related performance decline in ultraendurance mountain biking, using the 120-km "Swiss Bike Masters" race from 1995 to 2009 with data from 9,325 male athletes. The mean (±SD) race time decreased from 590 ± 80 min to 529 ± 88 min for overall finishers and from 415 ± 8 min to 359 ± 16 min for the top 10 finishers, respectively. The mean (±SD) age of all finishers significantly (P < 0.001) increased from 31.6 ± 6.5 years to 37.9 ± 8.9 years, while the age of the top 10 remained stable at 30.0 ± 1.6 years. The race time of mountain bikers aged between 25 and 34 years was significantly (P < 0.01) faster compared with the race times of older age groups. The age-related decline in performance of ultraendurance mountain bikers in the "Swiss Bike Masters" appears to start earlier than in other ultraendurance sports.

  12. Lower body symmetry and running performance in elite Jamaican track and field athletes.

    PubMed

    Trivers, Robert; Fink, Bernhard; Russell, Mark; McCarty, Kristofor; James, Bruce; Palestis, Brian G

    2014-01-01

    In a study of the degree of lower body symmetry in 73 elite Jamaican track and field athletes, we show that both their knees and ankles (but not their feet) are, on average, significantly more symmetrical than those of 116 similarly aged controls from the rural Jamaican countryside. Within the elite athletes, events ranged from the 100 m to the 800 m, and knee and ankle asymmetry was lower for those running the 100 m dashes than for those running the longer events with turns. Nevertheless, across all events those with more symmetrical knees and ankles (but not feet) had better results relative to international standards. Regression models considering lower body symmetry combined with gender, age and weight explain 27 to 28% of the variation in performance among athletes, with symmetry related to about 5% of this variation. Within 100 m sprinters, the results suggest that those with more symmetrical knees and ankles ran faster. Altogether, our work confirms earlier findings that knee and probably ankle symmetry are positively associated with sprinting performance, while extending these findings to elite athletes.


  14. Energetic cost of locomotion on different equine treadmills.

    PubMed

    Jones, J H; Ohmura, H; Stanley, S D; Hiraga, A

    2006-08-01

    Human athletes run faster and experience fewer injuries when running on surfaces with a stiffness 'tuned' to their bodies. We questioned if the same might be true for horses, and if so, would running on surfaces of different stiffness cause a measurable change in the amount of energy required to move at a given speed? Different brands of commercial treadmills have pans of unequal stiffness, and this difference would result in different metabolic power requirements to locomote at a given speed. We tested for differences in stiffness between a Mustang 2200 and a Säto I commercial treadmill by incrementally loading each treadmill near the centre of the pan with fixed weights and measuring the displacement of the pan as weights were added or removed from the pan. We trained six 3-year-old Thoroughbreds to run on the 2 treadmills. After 4 months the horses ran with reproducible specific maximum rates of O2 consumption (VO2max/kg bwt, 2.62 +/- 0.23 (s.d.) mlO2 STPD/sec/kg) at 14.2 +/- 0.7 (s.d.) m/sec. They were alternately run on the 2 treadmills at identical grade (0.40 +/- 0.02%) and speeds (1.83 (walk), 4.0 (trot) and 8.0 (canter) m/sec, all +/- 0.03 m/sec) while wearing an open-flow mask for measurement of VO2. The Mustang treadmill was over 6 times stiffer than the Säto. The VO2/kg bwt increased by approximately 4-fold over the range of speeds studied on both treadmills. Oxygen consumption was significantly lower at all speeds for the Mustang treadmill compared to the Säto. The fractional difference in energy cost decreased by a factor of 6 with increasing speed, although absolute difference in cost was relatively constant. We suggest it costs less energy for horses to walk, trot or canter on a stiffer treadmill than on a more compliant treadmill, at least within the ranges of stiffness evaluated. It may be possible to define a substrate stiffness 'tuned' to a horse's body enabling maximal energetic economy when running. The differences between treadmills allows more accurate comparisons between physiological studies conducted on treadmills of different stiffness, and might help to identify an ideal track stiffness to reduce locomotor injuries in equine athletes.
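
    The stiffness measurement described (incremental loading with displacement readout) amounts to a linear fit of force against displacement; a sketch with invented numbers, not the paper's data:

        import numpy as np

        # Hypothetical load/displacement pairs from incrementally weighting a pan.
        loads_N = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
        displacement_mm = np.array([0.0, 0.9, 1.8, 2.8, 3.7])

        # Slope of the least-squares line F = k*x gives the pan stiffness k.
        k_N_per_mm, _ = np.polyfit(displacement_mm, loads_N, 1)
        print(f"pan stiffness ~ {k_N_per_mm:.0f} N/mm")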

  15. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, which yield the areas flooded inland by the tsunami wave. These simulations require input data such as a high-resolution topobathymetry of the study area and the earthquake focal mechanism parameters, and their computational cost is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up on several coastal profiles (e.g., Synolakis, 1987), combined with offshore numerical simulations that do not include coastal inundation. In this case the numerical simulations are faster, but limitations arise because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles were parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves were parameterized by their height and period. As an application of the numerical-flume methodology, the parameterized coastal profiles and tsunami waves were combined, by means of numerical simulations in the flume, to build a populated database of run-up calculations. The result is a tsunami run-up database that considers realistic profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the run-up of any new tsunami wave to be calculated quickly by interpolation on the database, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale (regional or national) domain.
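
    Querying such a pre-computed run-up database is an interpolation problem; a sketch with a toy database (all values invented) using SciPy:

        import numpy as np
        from scipy.interpolate import griddata

        # Toy database for one profile class: (wave height m, period s) -> run-up m.
        db_points = np.array([[1, 10], [1, 20], [3, 10], [3, 20], [5, 15]], dtype=float)
        db_runup = np.array([1.2, 1.8, 3.5, 4.9, 6.1])

        # Wave characteristics delivered by the NLSWE model at the coast.
        new_wave = np.array([[2.0, 14.0]])
        print(griddata(db_points, db_runup, new_wave, method="linear"))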

  16. Isometric pre-conditioning blunts exercise-induced muscle damage but does not attenuate changes in running economy following downhill running.

    PubMed

    Lima, Leonardo C R; Bassan, Natália M; Cardozo, Adalgiso C; Gonçalves, Mauro; Greco, Camila C; Denadai, Benedito S

    2018-05-08

    Running economy (RE) is impaired following unaccustomed eccentric-biased exercises that induce muscle damage. It is also known that muscle damage is reduced when maximal voluntary isometric contractions (MVIC) are performed at a long muscle length 2-4 days prior to maximal eccentric exercise with the same muscle, a phenomenon that can be described as isometric pre-conditioning (IPC). We tested the hypothesis that IPC could attenuate muscle damage and changes in RE following downhill running. Thirty untrained men were randomly assigned into experimental or control groups and ran downhill on a treadmill (-15%) for 30 min. Participants in the experimental group completed 10 MVIC in a leg press machine two days prior to downhill running, while participants in the control group did not perform IPC. The magnitude of changes in muscle soreness determined 48 h after downhill running was greater for the control group (122 ± 28 mm) than for the experimental group (92 ± 38 mm). Isometric peak torque recovered faster in the experimental group compared with the control group (3 days vs. no full recovery, respectively). No significant effect of IPC was found for countermovement jump height, serum creatine kinase activity or any parameters associated with RE. These results supported the hypothesis that IPC attenuates changes in markers of muscle damage. The hypothesis that IPC attenuates changes in RE was not supported by our data. It appears that the mechanisms involved in changes in markers of muscle damage and parameters associated with RE following downhill running are not completely shared. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Do running speed and shoe cushioning influence impact loading and tibial shock in basketball players?

    PubMed Central

    Liebenberg, Jacobus; Woo, Jeonghyun; Park, Sang-Kyoon; Yoon, Suk-Hoon; Cheung, Roy Tsz-Hei; Ryu, Jiseon

    2018-01-01

    Background Tibial stress fracture (TSF) is a common injury in basketball players. This condition has been associated with high tibial shock and impact loading, which can be affected by running speed, footwear condition, and footstrike pattern. However, these relationships were established in runners rather than basketball players, with very little research on impact loading and speed. Hence, this study compared tibial shock, impact loading, and foot strike patterns in basketball players running at different speeds with different shoe cushioning properties. Methods Eighteen male collegiate basketball players performed straight running trials under different shoe cushioning (regular-, better-, and best-cushioning) and running speed conditions (3.0 m/s vs. 6.0 m/s) on a flat instrumented runway. A tri-axial accelerometer, a force plate and a motion capture system were used to determine tibial accelerations, vertical ground reaction forces and footstrike patterns in each condition, respectively. Comfort perception was indicated on a 150 mm Visual Analogue Scale. A 2 (speed) × 3 (footwear) repeated measures ANOVA was used to examine the main effects of shoe cushioning and running speed. Results Greater tibial shock (P < 0.001; η² = 0.80) and impact loading (P < 0.001; η² = 0.73–0.87) were experienced at the faster running speed. Interestingly, shoes with regular or best cushioning resulted in greater tibial shock (P = 0.03; η² = 0.39) and impact loading (P = 0.03; η² = 0.38–0.68) than shoes with better cushioning. Basketball players continued using a rearfoot strike during running, regardless of running speed and footwear cushioning conditions (P > 0.14; η² = 0.13). Discussion There may be an optimal band of shoe cushioning for better protection against TSF. These findings may provide insights to formulate rehabilitation protocols for basketball players who are recovering from TSF. PMID:29770274
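
    For reference, a 2 x 3 repeated-measures ANOVA of this shape can be run with statsmodels' AnovaRM; the data below are randomly generated stand-ins, not the study's measurements:

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Balanced toy design: 18 players x 2 speeds x 3 shoe conditions.
        rng = np.random.default_rng(0)
        rows = [{"player": p, "speed": s, "shoe": c,
                 "tibial_shock": rng.normal(5.0 if s == "3.0" else 9.0)}
                for p in range(18)
                for s in ("3.0", "6.0")
                for c in ("regular", "better", "best")]

        res = AnovaRM(pd.DataFrame(rows), "tibial_shock", "player",
                      within=["speed", "shoe"]).fit()
        print(res)  # F statistics for speed, shoe, and their interaction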

  18. Air Gaps, Size Effect, and Corner-Turning in Ambient LX-17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P C; Hernandez, A; Cabacungen, C

    2007-05-30

    Various ambient measurements are presented for LX-17. The size (diameter) effect has been measured with copper and Lucite confinement, where the failure radii are 4.0 and 6.5 mm, respectively. The air well corner-turn has been measured with an LX-07 booster, and the dead-zone results are comparable to the previous TATB-boosted work. Four double cylinders have been fired, and dead zones appear in all cases. The steel-backed samples are faster than the Lucite-backed samples by 0.6 µs. Bare LX-07 and LX-17 of 12.7 mm radius were fired with air gaps. Long acceptor regions were used to truly determine if detonation occurred or not. The LX-07 crossed at 10 mm with a slight time delay. Steady-state LX-17 crossed at a 3.5 mm gap but failed to cross at 4.0 mm. LX-17 with a 12.7 mm run after the booster crossed a 1.5 mm gap but failed to cross 2.5 mm. Timing delays were measured where the detonation crossed the gaps. The Tarantula model is introduced as embedded in the Linked Cheetah V4.0 reactive flow code at 4 zones/mm. Tarantula has four pressure regions: off, initiation, failure and detonation. A report card of 25 tests run with the same settings on LX-17 is shown, possibly the most extensive simultaneous calibration yet tried with an explosive. The physical basis of some of the input parameters is considered.

  19. Efficient Helicopter Aerodynamic and Aeroacoustic Predictions on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper presents parallel implementations of two codes used in a combined CFD/Kirchhoff methodology to predict the aerodynamic and aeroacoustic properties of helicopters. The rotorcraft Navier-Stokes code, TURNS, computes the aerodynamic flowfield near the helicopter blades, and the Kirchhoff acoustics code computes the noise in the far field, using the TURNS solution as input. The overall parallel strategy adds MPI message-passing calls to the existing serial codes to allow for communication between processors. As a result, the total code modifications required for parallel execution are relatively small. The biggest bottleneck in running the TURNS code in parallel comes from the LU-SGS algorithm that solves the implicit system of equations. We use a new hybrid domain decomposition implementation of LU-SGS to obtain good parallel performance on the SP-2. TURNS demonstrates excellent parallel speedups for quasi-steady and unsteady three-dimensional calculations of a helicopter blade in forward flight. The execution rate attained by the code on 114 processors is six times faster than the same cases run on one processor of the Cray C-90. The parallel Kirchhoff code also shows excellent parallel speedups and fast execution rates. As a performance demonstration, unsteady acoustic pressures are computed at 1886 far-field observer locations for a sample acoustics problem. The calculation requires over two hundred hours of CPU time on one C-90 processor but takes only a few hours on 80 processors of the SP-2. The resulting far-field acoustic field is analyzed with state-of-the-art audio and video rendering of the propagating acoustic signals.

  20. Elite sprinting: are athletes individually step-frequency or step-length reliant?

    PubMed

    Salo, Aki I T; Bezodis, Ian N; Batterham, Alan M; Kerwin, David G

    2011-06-01

    The aim of this study was to investigate the step characteristics of the very best 100-m sprinters in the world to understand whether elite athletes are individually more reliant on step frequency (SF) or step length (SL). A total of 52 male elite-level 100-m races were recorded from publicly available television broadcasts, with 11 analyzed athletes performing in 10 or more races. For each run of each athlete, the average SF and SL over the whole 100-m distance were analyzed. To determine any SF or SL reliance for an individual athlete, the 90% confidence interval (CI) for the difference between the SF-time and SL-time relationships was derived using a criterion nonparametric bootstrapping technique. Athletes performed these races with various combinations of SF and SL reliance. Athlete A10 yielded the highest positive CI difference (SL reliance), with a value of 1.05 (CI = 0.50-1.53). The largest negative difference (SF reliance) occurred for athlete A11 at -0.60, with a CI range of -1.20 to 0.03. Previous studies have generally identified only one of these variables as the main reason for faster running velocities. However, this study showed that there is large variation in performance patterns among elite athletes and that, overall, SF or SL reliance is a highly individual occurrence. It is proposed that athletes should take this reliance into account in their training, with SF-reliant athletes needing to keep their neural system ready for fast leg turnover and SL-reliant athletes requiring more concentration on maintaining strength levels.
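
    The criterion bootstrapping step can be sketched as case resampling of one athlete's races; everything below, data included, is illustrative rather than the published analysis:

        import numpy as np

        def bootstrap_ci_diff(times, sf, sl, n_boot=10000, ci=0.90, seed=1):
            """CI for the difference between the SL-time and SF-time correlations."""
            rng = np.random.default_rng(seed)
            n, diffs = len(times), []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)   # resample races with replacement
                r_sf = np.corrcoef(times[idx], sf[idx])[0, 1]
                r_sl = np.corrcoef(times[idx], sl[idx])[0, 1]
                diffs.append(r_sl - r_sf)
            lo, hi = np.percentile(diffs, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
            return lo, hi

        t = np.array([10.05, 9.98, 10.12, 9.91, 10.20, 9.95, 10.08, 10.01, 9.99, 10.15])
        sf = 4.6 + 0.1 * np.random.default_rng(2).standard_normal(10)  # steps per second
        sl = 100.0 / (t * sf)   # average SL follows from race time and SF over 100 m
        print(bootstrap_ci_diff(t, sf, sl))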

  1. Prescribed and self-reported seasonal training of distance runners.

    PubMed

    Hewson, D J; Hopkins, W G

    1995-12-01

    A survey of 123 distance-running coaches and their best runners was undertaken to describe prescribed seasonal training and its relationship to the performance and self-reported training of the runners. The runners were 43 females and 80 males, aged 24 +/- 8 years (mean +/- S.D.), training for events from 800 m to the marathon, with seasonal best paces of 86 +/- 6% of sex- and age-group world records. The coaches and runners completed a questionnaire on typical weekly volumes of interval and strength training, and typical weekly volumes and paces of moderate and hard continuous running, for build-up, pre-competition, competition and post-competition phases of a season. Prescribed training decreased in volume and increased in intensity from the build-up through to the competition phase, and had similarities with 'long slow distance' training. Coaches of the faster runners prescribed longer build-ups, greater volumes of moderate continuous running and slower relative paces of continuous running (r = 0.19-0.36, P < 0.05), suggesting beneficial effects of not training close to competition pace. The mean training volumes and paces prescribed by the coaches were similar to those reported by the runners, but the correlations between prescribed and reported training were poor (r = 0.2-0.6). Coaches may therefore need to monitor their runners' training more closely.

  2. MODFLOW-OWHM v2: The next generation of fully integrated hydrologic simulation software

    NASA Astrophysics Data System (ADS)

    Boyce, S. E.; Hanson, R. T.; Ferguson, I. M.; Reimann, T.; Henson, W.; Mehl, S.; Leake, S.; Maddock, T.

    2016-12-01

    The One-Water Hydrologic Flow Model (One-Water) is a MODFLOW-based integrated hydrologic flow model designed for the analysis of a broad range of conjunctive-use and climate-related issues. One-Water fully links the movement and use of groundwater, surface water, and imported water for consumption by agriculture and natural vegetation on the landscape, and for potable and other uses, within a supply-and-demand framework. One-Water includes linkages for deformation-, flow-, and head-dependent flows; additional observation and parameter options for higher-order calibrations; and redesigned code to facilitate self-updating models and faster simulation run times. The next version of One-Water, currently under development, will include a new surface-water operations module that simulates dynamic reservoir operations, a new sustainability analysis package that facilitates the estimation and simulation of reduced storage depletion and captured discharge, a conduit-flow process for karst aquifers and leaky pipe networks, a soil-zone process that adds enhanced infiltration, interflow, deep percolation and soil moisture, and a new subsidence and aquifer compaction package. It will also include enhancements to local grid refinement, and additional features to facilitate easier model updates, faster execution, better error messages, and more integration and cross-communication between the traditional MODFLOW packages. By retaining and tracking the water within the hydrosphere, One-Water accounts for "all of the water everywhere and all of the time." This philosophy provides more confidence in the water accounting by the scientific community and gives the public the foundation needed to address wider classes of problems. Ultimately, more complex questions are being asked about water resources, and these require more complete answers about conjunctive-use and climate-related issues.

  3. biobambam: tools for read pair collation based algorithms on BAM files

    PubMed Central

    2014-01-01

    Background Sequence alignment data is often ordered by coordinate (the id of the reference sequence plus the position on the sequence where the fragment was mapped) when stored in BAM files, as this simplifies the extraction of variants between the mapped data and the reference, or of variants within the mapped data. In this ordering, paired reads are usually separated in the file, which complicates other applications, such as duplicate marking or conversion to the FastQ format, that require access to the full information of the pairs. Results In this paper we introduce biobambam, a set of tools based on the efficient collation of alignments in BAM files by read name. The employed collation algorithm avoids time- and space-consuming sorting of alignments by read name where this is possible without using more than a specified amount of main memory. Using this algorithm, tasks such as duplicate marking in BAM files and conversion of BAM files to the FastQ format can be performed very efficiently with limited resources. We also make the collation algorithm available in the form of an API for other projects; this API is part of the libmaus package. Conclusions In comparison with previous approaches to problems involving the collation of alignments by read name, such as BAM-to-FastQ conversion or duplicate marking utilities, our approach can often perform an equivalent task more efficiently in terms of the required main memory and run time. Our BAM-to-FastQ conversion is faster than all widely known alternatives, including Picard and bamUtil. Our duplicate marking is about as fast as the closest competitor, bamUtil, for small data sets and faster than all known alternatives on large and complex data sets.
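
    The core of name collation can be sketched as a streaming dictionary of pending mates; the real tool additionally spills to disk when a configured memory cap is exceeded, which this toy version omits:

        def collate_pairs(alignments):
            """Stream alignments in file order and emit (mate1, mate2) tuples by read
            name, holding only the currently unpaired mates in memory."""
            pending = {}
            for aln in alignments:            # aln = (read_name, payload)
                name = aln[0]
                mate = pending.pop(name, None)
                if mate is None:
                    pending[name] = aln       # first mate seen; park it
                else:
                    yield mate, aln           # second mate seen; emit the pair

        reads = [("r1", "chr1:100"), ("r2", "chr1:150"), ("r1", "chr1:420"), ("r2", "chr2:77")]
        print(list(collate_pairs(reads)))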

  4. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading.

    PubMed

    Rahn, René; Budach, Stefan; Costanza, Pascal; Ehrhardt, Marcel; Hancox, Jonny; Reinert, Knut

    2018-05-03

    Pairwise sequence alignment is undoubtedly a central tool in many bioinformatics analyses. In this paper, we present a generically accelerated module for pairwise sequence alignments applicable to a broad range of applications. In our module, we unified the standard dynamic programming kernel used for pairwise sequence alignments and extended it with a generalized inter-sequence vectorization layout, such that many alignments can be computed simultaneously by exploiting the SIMD (Single Instruction Multiple Data) instructions of modern processors. We then extended the module with two layers of thread-level parallelization: (a) distributing many independent alignments over multiple threads, and (b) inherently parallelizing a single alignment computation using a work-stealing approach that produces a dynamic wavefront progressing along the minor diagonal. We evaluated our alignment vectorization and parallelization on different processors, including the newest Intel® Xeon® (Skylake) and Intel® Xeon Phi™ (KNL) processors, and on different use cases. The AVX512-BW (Byte and Word) instruction set, available on Skylake processors, can genuinely improve the performance of vectorized alignments. We could run single alignments 1600 times faster on the Xeon Phi™ and 1400 times faster on the Xeon® than executing them with our previous sequential alignment module. The module is programmed in C++ using the SeqAn (Reinert et al., 2017) library and distributed with version 2.4 under the BSD license. We support the SSE4, AVX2 and AVX512 instruction sets and include UME::SIMD, a SIMD-instruction wrapper library, to extend our module to further instruction sets. We thoroughly test all alignment components with all major C++ compilers on various platforms. rene.rahn@fu-berlin.de.
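
    The inter-sequence layout is easiest to see by replacing scalars with vectors in the DP recurrence; a NumPy sketch (standing in for the SIMD intrinsics, and assuming equal-length sequences for brevity):

        import numpy as np

        def batched_global_scores(a_batch, b_batch, match=2, mismatch=-1, gap=-2):
            """Needleman-Wunsch scores for many pairs at once: every DP cell holds a
            vector with one entry per sequence pair (the inter-sequence layout)."""
            B, m, n = len(a_batch), len(a_batch[0]), len(b_batch[0])
            a = np.array([[ord(c) for c in s] for s in a_batch])
            b = np.array([[ord(c) for c in s] for s in b_batch])
            H = np.zeros((B, m + 1, n + 1))
            H[:, :, 0] = gap * np.arange(m + 1)   # gap penalties along the borders
            H[:, 0, :] = gap * np.arange(n + 1)
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    s = np.where(a[:, i - 1] == b[:, j - 1], match, mismatch)
                    H[:, i, j] = np.maximum.reduce([H[:, i - 1, j - 1] + s,
                                                    H[:, i - 1, j] + gap,
                                                    H[:, i, j - 1] + gap])
            return H[:, m, n]

        print(batched_global_scores(["ACGT", "AAAA"], ["ACGT", "AAGA"]))  # [8. 5.]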

  5. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify the key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the largest amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  6. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel-independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel processing. Thus, for an algorithm with a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
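
    Of the listed kernels, flat-field correction is the most self-contained; a standard gain/offset sketch in NumPy (not the CAPIDS implementation itself):

        import numpy as np

        def flat_field_correct(raw, dark, flat):
            """Subtract the dark frame and divide by the dark-subtracted flat,
            rescaled so that the mean intensity is preserved."""
            gain = flat.astype(float) - dark
            return (raw.astype(float) - dark) * gain.mean() / np.maximum(gain, 1e-6)

        rng = np.random.default_rng(0)
        dark = rng.normal(10.0, 1.0, (4, 4))
        flat = dark + rng.normal(100.0, 5.0, (4, 4))
        raw = dark + 0.5 * (flat - dark)            # a uniform half-intensity scene
        print(np.round(flat_field_correct(raw, dark, flat), 2))  # ~constant image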

  7. Foot strike patterns of recreational and sub-elite runners in a long-distance road race.

    PubMed

    Larson, Peter; Higgins, Erin; Kaminski, Justin; Decker, Tamara; Preble, Janine; Lyons, Daniela; McIntyre, Kevin; Normile, Adam

    2011-12-01

    Although the biomechanical properties of the various types of running foot strike (rearfoot, midfoot, and forefoot) have been studied extensively in the laboratory, only a few studies have attempted to quantify the frequency of running foot strike variants among runners in competitive road races. We classified the left and right foot strike patterns of 936 distance runners, most of whom would be considered of recreational or sub-elite ability, at the 10 km point of a half-marathon/marathon road race. We classified 88.9% of runners at the 10 km point as rearfoot strikers, 3.4% as midfoot strikers, 1.8% as forefoot strikers, and 5.9% of runners exhibited discrete foot strike asymmetry. Rearfoot striking was more common among our sample of mostly recreational distance runners than has been previously reported for samples of faster runners. We also compared foot strike patterns of 286 individual marathon runners between the 10 km and 32 km race locations and observed increased frequency of rearfoot striking at 32 km. A large percentage of runners switched from midfoot and forefoot foot strikes at 10 km to rearfoot strikes at 32 km. The frequency of discrete foot strike asymmetry declined from the 10 km to the 32 km location. Among marathon runners, we found no significant relationship between foot strike patterns and race times.

  8. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous-energy Monte Carlo neutron transport code developed at UC Berkeley to execute efficiently on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality and fixed-source modes, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous-energy Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral-particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
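
    The data-parallel style WARP exploits can be illustrated by sampling free-flight distances for a whole batch of neutrons in a single vector operation (NumPy standing in for a GPU kernel; the cross-section value is an assumption):

        import numpy as np

        rng = np.random.default_rng(42)
        sigma_t = 0.35                    # assumed total macroscopic cross section, 1/cm
        n_neutrons = 1_000_000

        # Inverting the exponential attenuation CDF: d = -ln(1 - U) / Sigma_t.
        d = -np.log(1.0 - rng.random(n_neutrons)) / sigma_t
        print(d.mean(), 1.0 / sigma_t)    # sample mean approaches the mean free path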

  9. Multi-level emulation of a volcanic ash transport and dispersion model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen

    2018-01-01

    Following the disruption to European airspace caused by the eruption of Eyjafjallajökull in 2010 there has been a move towards producing quantitative predictions of volcanic ash concentration using volcanic ash transport and dispersion simulators. However, there is no formal framework for determining the uncertainties of these predictions and performing many simulations using these complex models is computationally expensive. In this paper a Bayesian linear emulation approach is applied to the Numerical Atmospheric-dispersion Modelling Environment (NAME) to better understand the influence of source and internal model parameters on the simulator output. Emulation is a statistical method for predicting the output of a computer simulator at new parameter choices without actually running the simulator. A multi-level emulation approach is applied using two configurations of NAME with different numbers of model particles. Information from many evaluations of the computationally faster configuration is combined with results from relatively few evaluations of the slower, more accurate, configuration. This approach is effective when it is not possible to run the accurate simulator many times and when there is also little prior knowledge about the influence of parameters. The approach is applied to the mean ash column loading in 75 geographical regions on 14 May 2010. Through this analysis it has been found that the parameters that contribute the most to the output uncertainty are initial plume rise height, mass eruption rate, free tropospheric turbulence levels and precipitation threshold for wet deposition. This information can be used to inform future model development and observational campaigns and routine monitoring. The analysis presented here suggests the need for further observational and theoretical research into parameterisation of atmospheric turbulence. Furthermore it can also be used to inform the most important parameter perturbations for a small operational ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate, configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.
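
    The multi-level idea (many cheap runs, a few accurate runs, plus a model of the discrepancy between them) can be sketched with two linear regressions; all functions below are synthetic stand-ins for the two NAME configurations:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        cheap = lambda x: 2.0 * x[:, 0] + 0.5 * x[:, 1]            # fast, approximate
        accurate = lambda x: cheap(x) + 0.3 * x[:, 0] * x[:, 1]    # slow, extra physics

        X_cheap = rng.uniform(0.0, 1.0, (200, 2))                  # many cheap runs
        emu_cheap = LinearRegression().fit(X_cheap, cheap(X_cheap))

        X_acc = rng.uniform(0.0, 1.0, (10, 2))                     # few expensive runs
        gap = accurate(X_acc) - emu_cheap.predict(X_acc)
        emu_gap = LinearRegression().fit(X_acc, gap)               # discrepancy model

        x_new = np.array([[0.4, 0.7]])
        print(emu_cheap.predict(x_new) + emu_gap.predict(x_new), accurate(x_new))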

  10. Branch-Based Centralized Data Collection for Smart Grids Using Wireless Sensor Networks

    PubMed Central

    Kim, Kwangsoo; Jin, Seong-il

    2015-01-01

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method. PMID:26007734

  11. Branch-based centralized data collection for smart grids using wireless sensor networks.

    PubMed

    Kim, Kwangsoo; Jin, Seong-il

    2015-05-21

    A smart grid is one of the most important applications in smart cities. In a smart grid, a smart meter acts as a sensor node in a sensor network, and a central device collects power usage from every smart meter. This paper focuses on a centralized data collection problem of how to collect every power usage from every meter without collisions in an environment in which the time synchronization among smart meters is not guaranteed. To solve the problem, we divide a tree that a sensor network constructs into several branches. A conflict-free query schedule is generated based on the branches. Each power usage is collected according to the schedule. The proposed method has important features: shortening query processing time and avoiding collisions between a query and query responses. We evaluate this method using the ns-2 simulator. The experimental results show that this method can achieve both collision avoidance and fast query processing at the same time. The success rate of data collection at a sink node executing this method is 100%. Its running time is about 35 percent faster than that of the round-robin method, and its memory size is reduced to about 10% of that of the depth-first search method.
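
    The branch idea in this pair of records can be sketched directly: decompose the collection tree into root-to-leaf branches and serialize the queries branch by branch, so a query never collides with responses travelling along the same branch. The sketch below is a naive, hedged rendition — it re-polls meters on shared branch prefixes, which the paper's schedule presumably avoids — with an illustrative tree layout.

        def root_to_leaf_branches(tree, root):
            """All root-to-leaf paths of a tree given as {node: [children]};
            each path is treated as one branch."""
            paths, stack = [], [(root, [root])]
            while stack:
                node, path = stack.pop()
                kids = tree.get(node, [])
                if not kids:                      # leaf closes the branch
                    paths.append(path)
                for k in kids:
                    stack.append((k, path + [k]))
            return paths

        def conflict_free_schedule(tree, root):
            """Assign each meter a query slot, one branch at a time, so
            queries and query responses never overlap on a branch."""
            slot, schedule = 0, []
            for branch in root_to_leaf_branches(tree, root):
                for meter in branch[1:]:          # skip the sink itself
                    schedule.append((slot, meter))
                    slot += 1
            return schedule

        tree = {0: [1, 2], 1: [3, 4], 2: [5]}     # sink 0 collecting from 5 meters
        print(conflict_free_schedule(tree, 0))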

  12. High-Performance Integrated Control of water quality and quantity in urban water reservoirs

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.; Goedbloed, A.

    2015-11-01

    This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).
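
    In outline, the control loop described above reduces to an hourly receding-horizon optimization over a fast surrogate model. The sketch below stands a one-line linear "emulator" in for Delft3D-FLOW and enumerates constant release candidates; the dynamics, costs, and horizon are illustrative assumptions, not the Marina Reservoir model.

        import numpy as np

        def emulator_step(salinity, release, inflow):
            # Low-order stand-in for the reduced salinity dynamics
            return 0.95 * salinity - 0.3 * release + 0.1 * inflow

        def mpc_release(salinity, inflows, horizon=6,
                        candidates=np.linspace(0, 1, 11)):
            """Pick the release minimizing a salinity-plus-deficit cost over
            the horizon; only the first decision is applied before
            re-planning at the next hour."""
            best_u, best_cost = 0.0, np.inf
            for u in candidates:
                s, cost = salinity, 0.0
                for h in range(horizon):
                    s = emulator_step(s, u, inflows[h])
                    cost += s ** 2 + 0.5 * (1.0 - u) ** 2  # salinity + supply deficit
                if cost < best_cost:
                    best_u, best_cost = u, cost
            return best_u

        u_now = mpc_release(salinity=2.0, inflows=[0.5] * 6)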

  13. Vibrational population relaxation of carbon monoxide in the heme pocket of photolyzed carbonmonoxy myoglobin: Comparison of time-resolved mid-IR absorbance experiments and molecular dynamics simulations

    PubMed Central

    Sagnella, Diane E.; Straub, John E.; Jackson, Timothy A.; Lim, Manho; Anfinrud, Philip A.

    1999-01-01

    The vibrational energy relaxation of carbon monoxide in the heme pocket of sperm whale myoglobin was studied by using molecular dynamics simulation and normal mode analysis methods. Molecular dynamics trajectories of solvated myoglobin were run at 300 K for both the δ- and ɛ-tautomers of the distal His-64. Vibrational population relaxation times of 335 ± 115 ps for the δ-tautomer and 640 ± 185 ps for the ɛ-tautomer were estimated by using the Landau–Teller model. Normal mode analysis was used to identify those protein residues that act as the primary “doorway” modes in the vibrational relaxation of the oscillator. Although the CO relaxation rates in both the ɛ- and δ-tautomers are similar in magnitude, the simulations predict that the vibrational relaxation of the CO is faster in the δ-tautomer with the distal His playing an important role in the energy relaxation mechanism. Time-resolved mid-IR absorbance measurements were performed on photolyzed carbonmonoxy hemoglobin (Hb13CO). From these measurements, a T1 time of 600 ± 150 ps was determined. The simulation and experimental estimates are compared and discussed. PMID:10588704
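
    For orientation, the classical Landau–Teller estimate used in studies of this kind computes the relaxation rate from the autocorrelation of the fluctuating force exerted by the bath on the frozen oscillator coordinate; a standard form (which may differ in detail from the authors' exact expression) is

        \frac{1}{T_1} \;=\; \frac{1}{\mu k_B T} \int_0^{\infty} \cos(\omega_0 t)\, \langle \delta F(0)\, \delta F(t) \rangle \, dt

    where \mu is the CO reduced mass, \omega_0 its vibrational frequency, and \delta F the force fluctuation along the bond due to the protein and solvent environment.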

  14. Biomechanical characteristics of skeletal muscles and associations between running speed and contraction time in 8- to 13-year-old children.

    PubMed

    Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena

    2017-02-01

    Objective To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.

  15. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental tool in numerous image processing and remote sensing applications. For example, unsupervised clustering is often used to obtain vegetation maps of an area of interest. This approach is useful when reliable training data are either scarce or expensive, and when relatively little a priori information about the data is available. Unsupervised clustering methods play a significant role in the pursuit of unsupervised classification. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points (or samples) in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute a set of cluster centers in d-space. Although there is no specific optimization criterion, the algorithm is similar in spirit to the well-known k-means clustering method, in which the objective is to minimize the average squared distance of each point to its nearest center, called the average distortion. One significant feature of ISOCLUS over k-means is that clusters may be merged or split, so the final number of clusters may differ from the number k supplied as part of the input. The algorithm is described later in this paper. The ISOCLUS algorithm can run very slowly, particularly on large data sets. Given its wide use in remote sensing, its efficient computation is an important goal. We have developed a fast implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration of the k-means algorithm, the filtering algorithm, by Kanungo et al. They showed that, by storing the data in a kd-tree, it was possible to significantly reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm. For technical reasons, explained later, it is necessary to make a minor modification to the ISOCLUS specification. We provide empirical evidence, on both synthetic and Landsat image data sets, that our algorithm's performance is essentially the same as that of ISOCLUS, but with significantly lower running times. We show that our algorithm runs from 3 to 30 times faster than a straightforward implementation of ISOCLUS. Our adaptation of the filtering algorithm involves the efficient computation of a number of cluster statistics that are needed for ISOCLUS, but not for k-means.
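
    To make the merge/split behaviour concrete, here is a toy ISODATA-flavoured loop: k-means assignment and center updates, plus simple merge and split rules, so the final number of clusters can drift away from the initial k. This is a didactic sketch with illustrative thresholds, not the paper's kd-tree filtering implementation.

        import numpy as np

        def isodata_sketch(X, k=4, iters=10, merge_tol=0.5, split_std=1.0, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # k-means step: assign points, recompute non-empty centers
                d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)
                centers = np.array([X[labels == j].mean(0)
                                    for j in range(len(centers))
                                    if (labels == j).any()])
                # merge: drop any center within merge_tol of one already kept
                keep = []
                for j in range(len(centers)):
                    if all(np.linalg.norm(centers[j] - centers[m]) > merge_tol
                           for m in keep):
                        keep.append(j)
                centers = centers[keep]
                # split: replace a high-spread cluster by two offset centers
                d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)
                new = []
                for j in range(len(centers)):
                    pts = X[labels == j]
                    if len(pts) > 2 and pts.std(0).max() > split_std:
                        s = pts.std(0)
                        off = np.zeros(X.shape[1])
                        off[s.argmax()] = s.max() / 2
                        new += [centers[j] + off, centers[j] - off]
                    else:
                        new.append(centers[j])
                centers = np.array(new)
            return centers

        centers = isodata_sketch(np.random.default_rng(1).normal(size=(500, 2)))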

  16. Elastic coupling of limb joints enables faster bipedal walking

    PubMed Central

    Dean, J.C.; Kuo, A.D.

    2008-01-01

    The passive dynamics of bipedal limbs alone are sufficient to produce a walking motion, without need for control. Humans augment these dynamics with muscles, actively coordinated to produce stable and economical walking. Present robots using passive dynamics walk much slower, perhaps because they lack elastic muscles that couple the joints. Elastic properties are well known to enhance running gaits, but their effect on walking has yet to be explored. Here we use a computational model of dynamic walking to show that elastic joint coupling can help to coordinate faster walking. In walking powered by trailing leg push-off, the model's speed is normally limited by a swing leg that moves too slowly to avoid stumbling. A uni-articular spring about the knee allows faster but uneconomical walking. A combination of uni-articular hip and knee springs can speed the legs for improved speed and economy, but not without the swing foot scuffing the ground. Bi-articular springs coupling the hips and knees can yield high economy and good ground clearance similar to humans. An important parameter is the knee-to-hip moment arm that greatly affects the existence and stability of gaits, and when selected appropriately can allow for a wide range of speeds. Elastic joint coupling may contribute to the economy and stability of human gait. PMID:18957360

  17. Design and Operation of a Fast, Thin-Film Thermocouple Probe on a Turbine Engine

    NASA Technical Reports Server (NTRS)

    Meredith, Roger D.; Wrbanek, John D.; Fralick, Gustave C.; Greer, Lawrence C., III; Hunter, Gary W.; Chen, Liang-Yu

    2014-01-01

    As a demonstration of technology maturation, a thin-film temperature sensor probe was fabricated and installed on a F117 turbofan engine via a borescope access port to monitor the temperature experienced in the bleed air passage of the compressor area during an engine checkout test run. To withstand the harsh conditions experienced in this environment, the sensor probe was built from high temperature materials. The thin-film thermocouple sensing elements were deposited by physical vapor deposition using pure metal elements, thus avoiding the inconsistencies of sputter-depositing particular percentages of materials to form standardized alloys commonly found in thermocouples. The sensor probe and assembly were subjected to a strict protocol of multi-axis vibrational testing as well as elevated temperature pressure testing to be qualified for this application. The thin-film thermocouple probe demonstrated a faster response than a traditional embedded thermocouple during the engine checkout run.

  18. Theoretical considerations on maximum running speeds for large and small animals.

    PubMed

    Fuentes, Mauricio A

    2016-02-07

    Mechanical equations for fast running speeds are presented and analyzed. One of the equations and its associated model predict that animals tend to experience larger mechanical stresses in their limbs (muscles, tendons and bones) as a result of larger stride lengths, suggesting a structural restriction entailing the existence of an absolute maximum possible stride length. The consequence for big animals is that an increasingly larger body mass implies decreasing maximal speeds, given that the stride frequency generally decreases for increasingly larger animals. Another restriction, acting on small animals, is discussed only in preliminary terms, but it seems safe to assume from previous studies that for a given range of body masses of small animals, those which are bigger are faster. The difference between speed scaling trends for large and small animals implies the existence of a range of intermediate body masses corresponding to the fastest animals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Self-running and self-floating two-dimensional actuator using near-field acoustic levitation

    NASA Astrophysics Data System (ADS)

    Chen, Keyu; Gao, Shiming; Pan, Yayue; Guo, Ping

    2016-09-01

    Non-contact actuators are promising technologies in metrology, machine tools, and hovercars, but have suffered from low energy efficiency, complex designs, and low controllability. Here we report a new design of a self-running and self-floating actuator capable of two-dimensional motion with an unlimited travel range. The proposed design exploits near-field acoustic levitation for heavy object lifting, and coupled resonant vibration to generate acoustic streaming for non-contact motion in designated directions. The device utilizes the resonant vibration of the structure for high energy efficiency, and adopts a single piezo element to achieve both levitation and non-contact motion in a compact and simple design. Experiments demonstrate that the proposed actuator can reach a moving speed of 1.65 cm/s or faster and is capable of transporting a total weight of 80 g at 1.2 W power consumption.

  20. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

    PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  1. Bayesian Model Selection under Time Constraints

    NASA Astrophysics Data System (ADS)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial-differential-equation-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off between bias, variance, and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that, under time constraints, more expensive models can be sampled far less often than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
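
    A minimal sketch of the proposed weighting: estimate each model's evidence by Monte Carlo from however many runs its runtime allows, bootstrap the estimate's standard error, and penalize the evidence by that error before comparing models. The penalty form and the two-model toy data below are illustrative assumptions, not the authors' exact scheme.

        import numpy as np

        def log_bme(loglik):
            """Monte Carlo evidence estimate from prior-sampled log-likelihoods."""
            return np.log(np.mean(np.exp(loglik)))

        def bootstrap_se(loglik, B=200, seed=1):
            """Cheap bootstrap standard error of the evidence estimate."""
            rng = np.random.default_rng(seed)
            n = len(loglik)
            return np.std([log_bme(rng.choice(loglik, n)) for _ in range(B)])

        # A fixed time budget buys many runs of the fast model but few of the
        # slow one, so the slow model's evidence estimate is far noisier.
        fast = np.random.default_rng(2).normal(-3.0, 1.0, 10000)
        slow = np.random.default_rng(3).normal(-2.8, 1.0, 20)
        for name, ll in [("fast", fast), ("slow", slow)]:
            est, se = log_bme(ll), bootstrap_se(ll)
            print(f"{name}: BME={est:.2f}, penalized={est - 2 * se:.2f}")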

  2. How gender and task difficulty affect a sport-protective response in young adults

    PubMed Central

    Lipps, David B.; Eckner, James T.; Richardson, James K.; Ashton-Miller, James A.

    2013-01-01

    We tested the hypotheses that gender and task difficulty affect the reaction, movement, and total response times associated with performing a head protective response. Twenty-four healthy young adults (13 females) performed a protective response of raising their hands from waist level to block a foam ball fired at their head from an air cannon. Participants initially stood 8.25 m away from the cannon (‘low difficulty’), and were moved successively closer in 60 cm increments until they failed to block at least 5 of 8 balls (‘high difficulty’). Limb motion was quantified using optoelectronic markers on the participants’ left wrist. Males had significantly faster total response times (p = 0.042), a trend towards faster movement times (p = 0.054), and faster peak wrist velocity (p < .001) and acceleration (p = 0.032) than females. Reaction time, movement time, and total response time were significantly faster under high difficulty conditions for both genders (p < .001). This study suggests that baseball and softball pitchers and fielders should have sufficient time to protect their head from a batted ball under optimal conditions if they are adequately prepared for the task. PMID:23234296

  3. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952

  4. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.
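
    The encode/decode asymmetry described in these two records — store reads in Bloom filters, then recover them by sliding a read-length window along the reference — fits in a short sketch. This toy version uses a single filter and ignores reads absent from the reference; filter sizing, the cascade that mops up false positives, and quality/ID streams are all omitted.

        import hashlib

        class Bloom:
            """Tiny Bloom filter; bit-array size and hash count are illustrative."""
            def __init__(self, bits=1 << 20, hashes=3):
                self.bits, self.hashes = bits, hashes
                self.arr = bytearray(bits // 8)
            def _positions(self, item):
                for i in range(self.hashes):
                    h = int(hashlib.sha256(f"{i}:{item}".encode()).hexdigest(), 16)
                    yield h % self.bits
            def add(self, item):
                for p in self._positions(item):
                    self.arr[p // 8] |= 1 << (p % 8)
            def __contains__(self, item):
                return all(self.arr[p // 8] & (1 << (p % 8))
                           for p in self._positions(item))

        def encode(reads):
            bf = Bloom()
            for r in reads:
                bf.add(r)                 # note: no alignment step anywhere
            return bf

        def decode(bf, reference, read_len):
            """Query every read-length window of the reference; false
            positives are what a cascade of filters must eliminate."""
            return [reference[i:i + read_len]
                    for i in range(len(reference) - read_len + 1)
                    if reference[i:i + read_len] in bf]

        ref = "ACGTACGGTTACCAGT"
        bf = encode([ref[2:8], ref[5:11]])
        print(decode(bf, ref, 6))         # recovers the two stored reads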

  5. Adjustments with running speed reveal neuromuscular adaptations during landing associated with high mileage running training.

    PubMed

    Verheul, Jasper; Clansey, Adam C; Lake, Mark J

    2017-03-01

    It remains to be determined whether running training influences the amplitude of lower limb muscle activations before and during the first half of stance and whether such changes are associated with joint stiffness regulation and usage of stored energy from tendons. Therefore, the aim of this study was to investigate neuromuscular and movement adaptations before and during landing in response to running training across a range of speeds. Two groups of high mileage (HM; >45 km/wk, n = 13) and low mileage (LM; <15 km/wk, n = 13) runners ran at four speeds (2.5-5.5 m/s) while lower limb mechanics and electromyography of the thigh muscles were collected. There were few differences in prelanding activation levels, but HM runners displayed lower activations of the rectus femoris, vastus medialis, and semitendinosus muscles postlanding, and these differences increased with running speed. HM runners also demonstrated higher initial knee stiffness during the impact phase compared with LM runners, which was associated with an earlier peak knee flexion velocity, and both were relatively unchanged by running speed. In contrast, LM runners had higher knee stiffness during the slightly later weight acceptance phase and the disparity was amplified with increases in speed. It was concluded that initial knee joint stiffness might predominantly be governed by tendon stiffness rather than muscular activations before landing. Estimated elastic work about the ankle was found to be higher in the HM runners, which might play a role in reducing weight acceptance phase muscle activation levels and improve muscle activation efficiency with running training. NEW & NOTEWORTHY Although neuromuscular factors play a key role during running, the influence of high mileage training on neuromuscular function has been poorly studied, especially in relation to running speed. This study is the first to demonstrate changes in neuromuscular conditioning with high mileage training, mainly characterized by lower thigh muscle activation after touch down, higher initial knee stiffness, and greater estimates of energy return, with adaptations being increasingly evident at faster running speeds. Copyright © 2017 the American Physiological Society.

  6. Strategies for Walking on a Laterally Oscillating Treadmill

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Bloomberg, Jacob, J.

    2008-01-01

    Most people use a variety of gait patterns each day. These changes can come about by voluntary actions, such as a decision to walk faster when running late. They can also be a result of both conscious and subconscious changes made to account for variation in the environmental conditions. Many factors can play a role in determining the optimal gait patterns, but the relative importance of each could vary between subjects. A goal of this study was to investigate whether subjects used consistent gait strategies when walking on an unstable support surface.

  7. Effect of Pseudomonas sp. P7014 on the growth of edible mushroom Pleurotus eryngii in bottle culture for commercial production.

    PubMed

    Kim, Min Keun; Math, Renukaradhya K; Cho, Kye Man; Shin, Ki Jae; Kim, Jong Ok; Ryu, Jae San; Lee, Young Han; Yun, Han Dae

    2008-05-01

    Addition of bacterial strain P7014 and its culture supernatant to the mushroom growing media caused the mushroom mycelia to run faster. The mycelial growth rate of Pleurotus eryngii increased up to 1.6-fold, and primordial formation was induced one day earlier. Moreover, the addition of the bacteria appears to have beneficial applications for commercial mushroom production, reducing the total cultivation time by about 5+/-2 days compared with the uninoculated control, which took 55+/-2 days.

  8. Wheat gliadin: digital imaging and database construction using a 4-band reference system of agarose isoelectric focusing patterns.

    PubMed

    Black, J A; Waggamon, K A

    1992-01-01

    An isoelectric focusing method using thin-layer agarose gel has been developed for wheat gliadin. Using flat-bed units with a third electrode, up to 72 samples per gel may be analyzed. Advantages over traditional acid polyacrylamide gel electrophoresis methodology include faster run times, nontoxic media, and greater sample capacity. The method is suitable for fingerprinting or purity testing of wheat varieties. Using digital images captured by a flat-bed scanner, a 4-band reference system using isoelectric points was devised. Software enables separated bands to be assigned pI values based upon reference tracks. The precision of assigned isoelectric points is shown to be on the order of 0.02 pH units. Captured images may be stored in a computer database and compared to unknown patterns to enable identification. Matching parameters may be adjusted, including the pI interval required for a match and the number of best matches reported.
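
    The band-to-pI assignment described above amounts to interpolating a band's gel position against the four reference bands. A hedged sketch, with placeholder reference positions and pI values:

        import numpy as np

        REF_POS = np.array([12.0, 45.0, 78.0, 110.0])  # band positions (mm), hypothetical
        REF_PI = np.array([8.2, 7.4, 6.6, 5.8])        # known pI of the 4 reference bands

        def assign_pi(band_pos_mm):
            """Linearly interpolate a sample band's pI from the reference track."""
            return float(np.interp(band_pos_mm, REF_POS, REF_PI))

        print(round(assign_pi(60.0), 2))               # a band between refs 2 and 3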

  9. Bond Graph Model of Cerebral Circulation: Toward Clinically Feasible Systemic Blood Flow Simulations

    PubMed Central

    Safaei, Soroush; Blanco, Pablo J.; Müller, Lucas O.; Hellevik, Leif R.; Hunter, Peter J.

    2018-01-01

    We propose a detailed CellML model of the human cerebral circulation that runs faster than real time on a desktop computer and is designed for use in clinical settings when the speed of response is important. A lumped parameter mathematical model, which is based on a one-dimensional formulation of the flow of an incompressible fluid in distensible vessels, is constructed using a bond graph formulation to ensure mass conservation and energy conservation. The model includes arterial vessels with geometric and anatomical data based on the ADAN circulation model. The peripheral beds are represented by lumped parameter compartments. We compare the hemodynamics predicted by the bond graph formulation of the cerebral circulation with that given by a classical one-dimensional Navier-Stokes model working on top of the whole-body ADAN model. Outputs from the bond graph model, including the pressure and flow signatures and blood volumes, are compared with physiological data. PMID:29551979
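
    As a flavour of what one lumped-parameter compartment in such a network reduces to, here is a two-element Windkessel sketch: a compliance stores volume, a resistance drains it, and conservation of mass gives one ODE per compartment. Parameter values and the inflow waveform are illustrative, not taken from the ADAN or bond-graph models.

        import math

        def windkessel(R=1.0, C=1.2, dt=1e-3, T=5.0, rate_hz=1.2):
            """Integrate C dp/dt = q_in(t) - p/R with a pulsatile inflow."""
            p, trace = 80.0, []
            for n in range(int(T / dt)):
                t = n * dt
                q_in = max(0.0, 400.0 * math.sin(2 * math.pi * rate_hz * t))
                p += dt * (q_in - p / R) / C   # mass conservation at the node
                trace.append((t, p))
            return trace

        pressures = windkessel()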

  10. Defiant: (DMRs: easy, fast, identification and ANnoTation) identifies differentially Methylated regions from iron-deficient rat hippocampus.

    PubMed

    Condon, David E; Tran, Phu V; Lien, Yu-Chin; Schug, Jonathan; Georgieff, Michael K; Simmons, Rebecca A; Won, Kyoung-Jae

    2018-02-05

    Identification of differentially methylated regions (DMRs) is the initial step towards the study of DNA methylation-mediated gene regulation. Previous approaches to calling DMRs suffer from false predictions, require extensive resources, and/or demand library installation and input conversion. We developed a new approach called Defiant to identify DMRs. Employing Weighted Welch Expansion (WWE), Defiant showed superior performance to other predictors in a series of benchmarking tests on artificial and real data. Defiant was subsequently used to investigate DNA methylation changes in iron-deficient rat hippocampus. Defiant identified DMRs close to genes associated with neuronal development and plasticity, which were not identified by its competitor. Importantly, Defiant runs between 5 and 479 times faster than currently available software packages. Defiant also accepts 10 different input formats widely used for DNA methylation data. Defiant effectively identifies DMRs for whole-genome bisulfite sequencing (WGBS), reduced-representation bisulfite sequencing (RRBS), Tet-assisted bisulfite sequencing (TAB-seq), and HpaII tiny fragment enrichment by ligation-mediated PCR-tag (HELP) assays.
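
    For readers unfamiliar with DMR calling, the basic scan can be sketched as a sliding window over per-CpG methylation fractions with a Welch-style unequal-variance test. This is an illustrative baseline, not Defiant's Weighted Welch Expansion; the window size and cutoffs are assumptions.

        import numpy as np
        from scipy import stats

        def dmr_scan(pos, meth_a, meth_b, win=5, p_cut=0.01, diff_cut=0.1):
            """meth_a, meth_b: (n_sites, n_replicates) methylation fractions
            for two groups at genomic coordinates pos. Returns candidate DMRs."""
            hits = []
            for i in range(len(pos) - win + 1):
                a = meth_a[i:i + win].ravel()
                b = meth_b[i:i + win].ravel()
                t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
                if p < p_cut and abs(a.mean() - b.mean()) > diff_cut:
                    hits.append((int(pos[i]), int(pos[i + win - 1]),
                                 a.mean() - b.mean(), p))
            return hits

        rng = np.random.default_rng(0)
        pos = np.arange(0, 2000, 100)
        a = rng.beta(2, 8, size=(20, 3))
        b = rng.beta(2, 8, size=(20, 3))
        b[8:13] += 0.4                    # implant a methylation shift
        print(dmr_scan(pos, a, b))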

  11. Evaluation of anthropometric, physiological, and skill-related tests for talent identification in female field hockey.

    PubMed

    Keogh, Justin W L; Weber, Clare L; Dalton, Carl T

    2003-06-01

    The purpose of the present study was to develop an effective testing battery for female field hockey by using anthropometric, physiological, and skill-related tests to distinguish between regional representative (Rep, n = 35) and local club level (Club, n = 39) female field hockey players. Rep players were significantly leaner and recorded faster times for the 10-m and 40-m sprints as well as the Illinois Agility Run (with and without dribbling a hockey ball). Rep players also had greater aerobic and lower body muscular power and were more accurate in the shooting accuracy test, p < 0.05. No significant differences between groups were evident for height, body mass, speed decrement in 6 x 40-m repeated sprints, handgrip strength, or pushing speed. These results indicate that %BF, sprinting speed, agility, dribbling control, aerobic and muscular power, and shooting accuracy can distinguish between female field hockey players of varying standards. Therefore talent identification programs for female field hockey should include assessments of these physical parameters.

  12. Planetary influence in the gap of a protoplanetary disk: structure formation and an application to V1247 Ori

    NASA Astrophysics Data System (ADS)

    Alvarez-Meraz, R.; Nagel, E.; Rendon, F.; Barragan, O.

    2017-10-01

    We present a set of hydrodynamical models of a planetary system embedded in a protoplanetary disk in order to extract the number of dust structures formed in the disk, and their masses and sizes, within the optical depth ranges τ≤0.5, 0.5<τ<2 and τ≥2. The study of the structures shows: (1) an increase in the number of planets implies an increase in the creation rate of massive structures; (2) lower planetary mass accretion corresponds to slower temporal evolution of the optically thin structures; (3) an increase in the number of planets allows a faster evolution of the structures in the Hill radius for the different optical depth ranges of the inner planets. An ad hoc simulation was run using the available information on the stellar system V1247 Ori, leading to a planetary-system model that explains the SED and is consistent with interferometric observations of structures.

  13. Effect of heavy back squats on repeated sprint performance in trained men.

    PubMed

    Duncan, M J; Thurgood, G; Oxford, S W

    2014-04-01

    This study examined the impact of post-activation potentiation (PAP) on repeated sprint performance in trained Rugby Union players. Ten male professional Rugby Union players (mean age=25.2±5.02 years) performed 7 30-meter sprints, separated by 25 seconds, 4 minutes after back squats (90% of 1-repetition maximum) or a control condition, performed in a counterbalanced order. Significant condition × sprint interactions for 10-meter (P=0.02) and 30-meter (P=0.05) times indicated that times were significantly faster in the PAP condition for sprints 5, 6 and 7 across both distances. Fatigue rate was also significantly lower in the PAP condition for 10-meter (P=0.023) and 30-meter (P=0.006) sprint running speed. This study provides evidence that a heavy resistance exercise stimulus administered four minutes prior to repeated sprints can offset the decline in sprint performance seen during subsequent maximal sprinting over 10 and 30 meters in Rugby Union players.

  14. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing using GPU. We adopted CUDA, an application programming interface for GPU by NVIDIA, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  15. A fast parallel clustering algorithm for molecular simulation trajectories.

    PubMed

    Zhao, Yutong; Sheong, Fu Kit; Sun, Jian; Sander, Pedro; Huang, Xuhui

    2013-01-15

    We implemented a GPU-powered parallel k-centers algorithm to perform clustering on the conformations of molecular dynamics (MD) simulations. The algorithm is up to two orders of magnitude faster than the CPU implementation. We tested our algorithm on four protein MD simulation datasets ranging from the small Alanine Dipeptide to a 370-residue Maltose Binding Protein (MBP). It is capable of grouping 250,000 conformations of the MBP into 4000 clusters within 40 seconds. To achieve this, we effectively parallelized the code on the GPU and utilized the triangle inequality of metric spaces. Furthermore, the algorithm's running time is linear with respect to the number of cluster centers. In addition, we found the triangle inequality to be less effective in higher dimensions and provide a mathematical rationale. Finally, using Alanine Dipeptide as an example, we show a strong correlation between cluster populations resulting from the k-centers algorithm and the underlying density. Copyright © 2012 Wiley Periodicals, Inc.
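
    The k-centers objective admits a simple greedy (farthest-point) algorithm whose cost is one pass per added center, which is why the running time is linear in the number of cluster centers. The CPU sketch below keeps only the running nearest-center distance; the paper's GPU version additionally prunes distance computations with the triangle inequality, which this sketch omits.

        import numpy as np

        def k_centers(X, k):
            """Greedy farthest-point k-centers on the rows of X."""
            centers = [0]                            # seed with the first point
            d = np.linalg.norm(X - X[0], axis=1)     # distance to nearest center
            labels = np.zeros(len(X), dtype=int)
            for j in range(1, k):
                c = int(d.argmax())                  # farthest point joins
                centers.append(c)
                dc = np.linalg.norm(X - X[c], axis=1)
                closer = dc < d
                labels[closer] = j
                d[closer] = dc[closer]
            return centers, labels

        X = np.random.default_rng(0).normal(size=(10000, 3))  # toy conformations
        centers, labels = k_centers(X, 40)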

  16. Parallel VLSI architecture emulation and the organization of APSA/MPP

    NASA Technical Reports Server (NTRS)

    Odonnell, John T.

    1987-01-01

    The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a VAX. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.

  17. Dietary tendencies as predictors of marathon time in novice marathoners.

    PubMed

    Wilson, Patrick B; Ingraham, Stacy J; Lundstrom, Chris; Rhodes, Gregory

    2013-04-01

    The effects of dietary factors such as carbohydrate (CHO) on endurance-running performance have been extensively studied under laboratory-based and simulated field conditions. Evidence from "real-life" events, however, is poorly characterized. The purpose of this observational study was to examine the associations between prerace and in-race nutrition tendencies and performance in a sample of novice marathoners. Forty-six college students (36 women and 10 men) age 21.3 ± 3.3 yr recorded diet for 3 d before, the morning of, and during a 26.2-mile marathon. Anthropometric, physiological, and performance measurements were assessed before the marathon so the associations between diet and marathon time could be included as part of a stepwise-regression model. Mean marathon time was 266 ± 42 min. A pre-marathon 2-mile time trial explained 73% of the variability in marathon time (adjusted R2 = .73, p < .001). Day-before + morning-of CHO (DBMC) was the only other significant predictor of marathon time, explaining an additional 4% of the variability in marathon time (adjusted R2 = .77, p = .006). Other factors such as age, body-mass index, gender, day-before + morning-of energy, and in-race CHO were not significant independent predictors of marathon time. In this sample of primarily novice marathoners, DBMC intake was associated with faster marathon time, independent of other known predictors. These results suggest that novice and recreational marathoners should consider consuming a moderate to high amount of CHO in the 24-36 hr before a marathon.

  18. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
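
    The arithmetic behind the shortened run period is simply to simulate one representative week per quarter and scale to an annual figure before forming the compliance index. The sketch below assumes equal 13-week quarters and made-up weekly results; the paper's week-selection procedure is not reproduced.

        WEEKS_PER_QUARTER = 13

        def annualize(weekly_use_by_quarter):
            """Scale four simulated weeks (one per quarter) to an annual total."""
            return sum(w * WEEKS_PER_QUARTER for w in weekly_use_by_quarter)

        proposed = annualize([310.0, 270.5, 355.2, 298.9])  # kWh, illustrative
        baseline = annualize([330.0, 285.0, 380.1, 310.4])
        compliance_index = proposed / baseline              # compared against the code limit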

  19. Effects of optic flow on spontaneous overground walk-to-run transition.

    PubMed

    De Smet, Kristof; Malcolm, P; Lenoir, M; Segers, V; De Clercq, D

    2009-03-01

    Perturbations of optic flow can induce changes in walking speed since subjects modulate their speed with respect to the speed perceived from optic flow. The purpose of this study was to examine the effects of optic flow on steady-state as well as on non-steady-state locomotion, i.e. on spontaneous overground walk-to-run transitions (WRT) during which subjects were able to accelerate in their preferred way. In this experiment, while subjects moved along a specially constructed hallway, a series of stripes projected on the side walls and ceiling were made to move backward (against the locomotion direction) at an absolute speed of -2 m s(-1) (condition B), or to move forward at an absolute speed of +2 m s(-1) (condition F), or to remain stationary (condition C). While condition B and condition F entailed a decrease and an increase in preferred walking speed, respectively, the spatiotemporal characteristics of the spontaneous walking acceleration prior to reaching WRT were not influenced by modified visual information. However, backward moving stripes induced a smaller speed increase when making the actual transition to running. As such, running speeds after making the WRT were lower in condition B. These results indicate that the walking acceleration prior to reaching the WRT is more robust against visual perturbations compared to walking at preferred walking speed. This could be due to a higher contribution from spinal control during the walking acceleration phase. However, the finding that subjects started to run at a lower running speed when experiencing an approaching optic flow faster than locomotion speed shows that the actual realization of the WRT is not totally independent of external cues.

  20. Minimalist Running Shoes and Injury Risk Among United States Army Soldiers.

    PubMed

    Grier, Tyson; Canham-Chervak, Michelle; Bushman, Timothy; Anderson, Morgan; North, William; Jones, Bruce H

    2016-06-01

    Minimalist running shoes (MRS) are lightweight, are extremely flexible, and have little to no cushioning. It has been thought that MRS will enhance running performance and decrease injury risk. To compare physical characteristics, fitness performance, and injury risks associated with soldiers wearing MRS and those wearing traditional running shoes (TRS). Case series; Level of evidence, 4. Participants were men in a United States Army brigade (N = 1332). Physical characteristics and Army Physical Fitness Test data were obtained by survey. Fitness performance testing was administered at the brigade, and the types of footwear worn were identified by visual inspection. Shoe types were categorized into 2 groups: TRS (stability, cushioning, and motion control) and MRS. Injuries from the previous 12 months were obtained from the Defense Medical Surveillance System. A t test was used to determine mean differences between personal characteristics, training, and fitness performance metrics by shoe type. Hazard ratios and 95% CIs were calculated to determine injury risk by shoe type, controlling for other risk factors. A majority of soldiers wore cushioning shoes (57%), followed by stability shoes (24%), MRS (17%), and motion control shoes (2%). Soldiers wearing MRS were slightly younger than those wearing TRS (P < .01); performed more push-ups, sit-ups, and pull-ups (P < .01); and ran faster during the 2-mile run (P = .01). When other risk factors were controlled, there was no difference in injury risk for running shoe type between soldiers wearing MRS compared with TRS. Soldiers who chose to wear MRS were younger and had higher physical performance scores compared with soldiers wearing TRS. When these differences are controlled, use of MRS does not appear to be associated with higher or lower injury risk in this population. © 2016 The Author(s).

  1. Mutualism and evolutionary multiplayer games: revisiting the Red King.

    PubMed

    Gokhale, Chaitanya S; Traulsen, Arne

    2012-11-22

    Coevolution of two species is typically thought to favour the evolution of faster evolutionary rates helping a species keep ahead in the Red Queen race, where 'it takes all the running you can do to stay where you are'. In contrast, if species are in a mutualistic relationship, it was proposed that the Red King effect may act, where it can be beneficial to evolve slower than the mutualistic species. The Red King hypothesis proposes that the species which evolves slower can gain a larger share of the benefits. However, the interactions between the two species may involve multiple individuals. To analyse such a situation, we resort to evolutionary multiplayer games. Even in situations where evolving slower is beneficial in a two-player setting, faster evolution may be favoured in a multiplayer setting. The underlying features of multiplayer games can be crucial for the distribution of benefits. They also suggest a link between the evolution of the rate of evolution and group size.

  2. Simulation Study of Evacuation Control Center Operations Analysis

    DTIC Science & Technology

    2011-06-01

    [Extraction residue from the report's front matter; recoverable entries: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B. Key Statistic Matrix: Runs 1-12; Appendix C. Blue Dart; figures: Paired T result - Run 5 v. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9]

  3. Do Optomechanical Metasurfaces Run Out of Time?

    PubMed

    Viaene, Sophie; Ginis, Vincent; Danckaert, Jan; Tassin, Philippe

    2018-05-11

    Artificially structured metasurfaces make use of specific configurations of subwavelength resonators to efficiently manipulate electromagnetic waves. Additionally, optomechanical metasurfaces have the desired property that their actual configuration may be tuned by adjusting the power of a pump beam, as resonators move to balance pump-induced electromagnetic forces with forces due to elastic filaments or substrates. Although the reconfiguration time of optomechanical metasurfaces crucially determines their performance, the transient dynamics of unit cells from one equilibrium state to another is not understood. Here, we make use of tools from nonlinear dynamics to analyze the transient dynamics of generic optomechanical metasurfaces based on a damped-resonator model with one configuration parameter. We show that the reconfiguration time of optomechanical metasurfaces is not only limited by the elastic properties of the unit cell but also by the nonlinear dependence of equilibrium states on the pump power. For example, when switching is enabled by hysteresis phenomena, the reconfiguration time is seen to increase by over an order of magnitude. To illustrate these results, we analyze the nonlinear dynamics of a bilayer cross-wire metasurface whose optical activity is tuned by an electromagnetic torque. Moreover, we provide a lower bound for the configuration time of generic optomechanical metasurfaces. This lower bound shows that optomechanical metasurfaces cannot be faster than state-of-the-art switches at reasonable powers, even at optical frequencies.

  4. Do Optomechanical Metasurfaces Run Out of Time?

    NASA Astrophysics Data System (ADS)

    Viaene, Sophie; Ginis, Vincent; Danckaert, Jan; Tassin, Philippe

    2018-05-01

    Artificially structured metasurfaces make use of specific configurations of subwavelength resonators to efficiently manipulate electromagnetic waves. Additionally, optomechanical metasurfaces have the desired property that their actual configuration may be tuned by adjusting the power of a pump beam, as resonators move to balance pump-induced electromagnetic forces with forces due to elastic filaments or substrates. Although the reconfiguration time of optomechanical metasurfaces crucially determines their performance, the transient dynamics of unit cells from one equilibrium state to another is not understood. Here, we make use of tools from nonlinear dynamics to analyze the transient dynamics of generic optomechanical metasurfaces based on a damped-resonator model with one configuration parameter. We show that the reconfiguration time of optomechanical metasurfaces is not only limited by the elastic properties of the unit cell but also by the nonlinear dependence of equilibrium states on the pump power. For example, when switching is enabled by hysteresis phenomena, the reconfiguration time is seen to increase by over an order of magnitude. To illustrate these results, we analyze the nonlinear dynamics of a bilayer cross-wire metasurface whose optical activity is tuned by an electromagnetic torque. Moreover, we provide a lower bound for the configuration time of generic optomechanical metasurfaces. This lower bound shows that optomechanical metasurfaces cannot be faster than state-of-the-art switches at reasonable powers, even at optical frequencies.
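
    The transient behaviour described in these two records can be illustrated with a one-coordinate damped-resonator toy: the configuration x settles where the elastic restoring force balances a pump-induced force that grows with pump power P. The force law below is an illustrative resonance-shaped nonlinearity chosen to exhibit bistability, not the paper's model.

        import numpy as np

        def settle(P, x0=0.0, m=1.0, c=0.4, k=1.0, dt=1e-3, T=60.0):
            """Integrate m x'' + c x' + k x = F_pump(P, x) and return the
            settled configuration."""
            x, v = x0, 0.0
            for _ in range(int(T / dt)):
                f_pump = P / (1.0 + ((x - 1.0) / 0.3) ** 2)  # narrow resonance
                v += dt * (f_pump - c * v - k * x) / m
                x += dt * v
            return x

        # Sweeping the power up and then down can land on different equilibria
        # (hysteresis); near the switching threshold the settling time grows,
        # which is the reconfiguration-time penalty the paper quantifies.
        up = [settle(P) for P in np.linspace(0.0, 1.5, 7)]
        down = [settle(P, x0=up[-1]) for P in np.linspace(1.5, 0.0, 7)]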

  5. Effects of recording time and residue on dose-response by LiMgPO4: Tb, B ceramic disc synthesized via improved sintering process

    NASA Astrophysics Data System (ADS)

    Kong, Xirui; Fu, Zhilong; Que, Huiying; Fan, Yanwei; Chen, Zhaoyang; He, Chengfa

    2018-05-01

    The LiMgPO4: Tb, B ceramic disc is successfully synthesized via an improved sintering method, which enables the disc sample to have two flat and smooth surfaces. It is worth mentioning that the OSL signal intensity of the LiMgPO4: Tb, B disc attenuates much faster than that of commercial Al2O3: C: only 1 s is needed to reduce the intensity to 10%, whereas Al2O3:C needs more than 40 s. Some essential OSL properties related to the dose detection method of this sample have also been systematically investigated. Although the dose-response curve has better linearity with longer recording times, extending the recording time beyond 6 s makes no further contribution to the linearity of the curve. If the bleaching time is more than 35 s, the residue created by a previous high-dose detection (10 Gy) has almost no influence (a positive deviation below 5.59%) on the next lower-dose detection (0.1 Gy). The material reaches the end of its service life when the total ionizing dose reaches 30 kGy. Therefore, the LiMgPO4: Tb, B ceramic material is a potential candidate for real-time dose monitoring with optical fiber telemetering technology.

  6. User's manual for the HYPGEN hyperbolic grid generator and the HGUI graphical user interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Chiu, Ing-Tsau; Buning, Pieter G.

    1993-01-01

    The HYPGEN program is used to generate a 3-D volume grid over a user-supplied single-block surface grid. This is accomplished by solving the 3-D hyperbolic grid generation equations consisting of two orthogonality relations and one cell volume constraint. In this user manual, the required input files and parameters and output files are described. Guidelines on how to select the input parameters are given. Illustrated examples are provided showing a variety of topologies and geometries that can be treated. HYPGEN can be used in stand-alone mode as a batch program or it can be called from within a graphical user interface HGUI that runs on Silicon Graphics workstations. This user manual provides a description of the menus, buttons, sliders, and type-in fields in HGUI for users to enter the parameters needed to run HYPGEN. Instructions are given on how to configure the interface to allow HYPGEN to run either locally or on a faster remote machine through the use of shell scripts on UNIX operating systems. The volume grid generated is copied back to the local machine for visualization using a built-in hook to PLOT3D.
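
    The local-versus-remote execution pattern the manual describes (stage inputs on the remote host, run there, copy the volume grid back for visualization) can be sketched with a small wrapper. Host names, paths, and the executable name below are placeholders, not HYPGEN's actual scripts.

        import subprocess

        def run_grid_generator(input_file, remote_host=None, remote_dir="/tmp/hypgen"):
            if remote_host is None:
                subprocess.run(["hypgen", input_file], check=True)   # local run
                return
            subprocess.run(["scp", input_file, f"{remote_host}:{remote_dir}/"],
                           check=True)                               # stage input
            subprocess.run(["ssh", remote_host,
                            f"cd {remote_dir} && hypgen {input_file}"], check=True)
            subprocess.run(["scp", f"{remote_host}:{remote_dir}/volume.grid", "."],
                           check=True)                               # fetch grid back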

  7. Speed and incline during Thoroughbred horse racing: racehorse speed supports a metabolic power constraint to incline running but not to decline running

    PubMed Central

    Self, Z. T.; Spence, A. J.

    2012-01-01

    We used a radio tracking system to examine the speed of 373 racehorses on different gradients on an undulating racecourse during 33 races, each lasting a few minutes. Horses show a speed detriment on inclines (0.68 m·s−1·1% gradient−1, r2 = 0.97), the magnitude of which corresponds to trading off the metabolic cost (power) of height gain with the metabolic cost (power) of horizontal galloping. A similar relationship can be derived from published data for human runners. The horses, however, were also slower on the decline (−0.45 m·s−1·1% gradient−1, r2 = 0.92). Human athletes run faster on a decline, which can be explained by the energy gained by the center of mass from height loss. This study has shown that horses go slower, which may be attributable to the anatomical simplicity of their front legs limiting weight support and stability when going downhill. These findings provide insight into limits to athletic performance in racehorses, which may be used to inform training regimens, as well as advancing knowledge from both veterinary and basic science perspectives. PMID:22678967
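
    The two fitted slopes quoted above give a back-of-envelope speed model; with g the unsigned gradient in percent and v_0 the flat-ground speed (the 17 m/s used below is illustrative):

        v_{\text{incline}}(g) \approx v_0 - 0.68\,g, \qquad v_{\text{decline}}(g) \approx v_0 - 0.45\,g

    so on a 2% climb a 17 m/s galloping horse slows to about 15.6 m/s, and on a 2% descent to about 16.1 m/s — slower in both directions, unlike human runners.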

  8. A 3.9 ps Time-Interval RMS Precision Time-to-Digital Converter Using a Dual-Sampling Method in an UltraScale FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Liu, Chong

    2016-10-01

    Field programmable gate arrays (FPGAs) manufactured with more advanced processing technology have faster carry chains and smaller delay elements, which are favorable for the design of tapped delay line (TDL)-style time-to-digital converters (TDCs) in FPGA. However, new challenges are posed in using them to implement TDCs with a high time precision. In this paper, we propose a bin realignment method and a dual-sampling method for TDC implementation in a Xilinx UltraScale FPGA. The former realigns the disordered time delay taps so that the TDC precision can approach the limit of its delay granularity, while the latter doubles the number of taps in the delay line so that the TDC precision beyond the cell delay limitation can be expected. Two TDC channels were implemented in a Kintex UltraScale FPGA, and the effectiveness of the new methods was evaluated. For fixed time intervals in the range from 0 to 440 ns, the average RMS precision measured by the two TDC channels reaches 5.8 ps using the bin realignment, and it further improves to 3.9 ps by using the dual-sampling method. The time precision has a 5.6% variation in the measured temperature range. Every part of the TDC, including dual-sampling, encoding, and on-line calibration, could run at a 500 MHz clock frequency. The system measurement dead time is only 4 ns.
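
    The calibration half of such a design can be sketched as a standard code-density test: feed many uncorrelated hits, histogram which delay tap each hit lands in, and convert counts to bin widths and bin centers. This is a generic TDL-TDC calibration sketch; the paper's bin realignment and dual-sampling specifics are beyond it.

        import numpy as np

        def code_density_calibration(hit_codes, n_bins, clock_period_ps=2000.0):
            """Map raw tap codes to picosecond timestamps via bin statistics."""
            counts = np.bincount(hit_codes, minlength=n_bins).astype(float)
            widths = counts / counts.sum() * clock_period_ps  # bin width in ps
            edges = np.concatenate([[0.0], np.cumsum(widths)])
            centers = edges[:-1] + widths / 2                 # time value per code
            return widths, centers

        codes = np.random.default_rng(0).integers(0, 128, size=100_000)
        widths, centers = code_density_calibration(codes, 128)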

  9. Evaluation Metrics for the Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Traversat, Bernard; McNab, David; Nitzberg, Bill; Fineberg, Sam; Blaylock, Bruce T. (Technical Monitor)

    1993-01-01

    On February 17th 1993, the Numerical Aerodynamic Simulation (NAS) facility located at the NASA Ames Research Center installed a 224-node Intel Paragon XP/S-15 system. After its installation, the Paragon was found to be in a very immature state and was unable to support a NAS users' workload, composed of a wide range of development and production activities. As a first step towards addressing this problem, we implemented a set of metrics to objectively monitor the system as operating system and hardware upgrades were installed. The metrics were designed to measure four aspects of the system that we consider essential to support our workload: availability, utilization, functionality, and performance. This report presents the metrics collected from February 1993 to August 1993. Since its installation, the Paragon availability has improved from a low of 15% uptime to a high of 80%, while its utilization has remained low. Functionality and performance have improved from merely running one of the NAS Parallel Benchmarks to running all of them faster (between 1 and 2 times) than on the iPSC/860. In spite of the progress accomplished, fundamental limitations of the Paragon operating system are restricting the Paragon from supporting the NAS workload. The maximum operating system message passing (NORMA IPC) bandwidth was measured at 11 Mbytes/s, well below the peak hardware bandwidth (175 Mbytes/s), limiting overall virtual memory and Unix services (i.e., disk and HiPPI I/O) performance. The high NX application message passing latency (184 microseconds), three times that of the iPSC/860, was found to significantly degrade the performance of applications relying on small message sizes. The amount of memory available for an application was found to be approximately 10 Mbytes per node, indicating that the OS is taking more space than anticipated (6 Mbytes per node).

  10. Dyslexics’ faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation

    PubMed Central

    Jaffe-Dax, Sagi; Frenkel, Or; Ahissar, Merav

    2017-01-01

    Dyslexia is a prevalent reading disability whose underlying mechanisms are still disputed. We studied the neural mechanisms underlying dyslexia using a simple frequency-discrimination task. Though participants were asked to compare the two tones in each trial, implicit memory of previous trials affected their responses. We hypothesized that implicit memory decays faster among dyslexics. We tested this by increasing the temporal intervals between consecutive trials, and by measuring the behavioral impact and ERP responses from the auditory cortex. Dyslexics showed a faster decay of implicit memory effects on both measures, with similar time constants. Finally, faster decay of implicit memory also characterized the impact of sound regularities in benefitting dyslexics' oral reading rate. Their benefit decreased faster as a function of the time interval from the previous reading of the same non-word. We propose that dyslexics’ shorter neural adaptation paradoxically accounts for their longer reading times, since it reduces their temporal window of integration of past stimuli, resulting in noisier and less reliable predictions for both simple and complex stimuli. Less reliable predictions limit their acquisition of reading expertise. DOI: http://dx.doi.org/10.7554/eLife.20557.001 PMID:28115055

  11. Relationship between metabolic cost and muscular coactivation across running speeds.

    PubMed

    Moore, Isabel S; Jones, Andrew M; Dixon, Sharon J

    2014-11-01

    Muscular coactivation can help stabilise a joint, but contrasting results in previous gait studies highlight that it is not clear whether this is metabolically beneficial. The aim was to assess the relationship between the metabolic cost of running and muscular coactivation across different running speeds, in addition to assessing the reliability and precision of lower limb muscular coactivation. Eleven female recreational runners visited the laboratory on two separate occasions. On both occasions subjects ran at three speeds (9.1, 11 and 12 km h(-1)) for six minutes each. Oxygen consumption and electromyographic data were simultaneously recorded during the final two minutes of each speed. Temporal coactivations of lower limb muscles during the stance phase were calculated. Five muscles were assessed: rectus femoris, vastus lateralis, biceps femoris, tibialis anterior and gastrocnemius lateralis. Nonparametric correlations revealed at least one significant, positive association between lower limb muscular coactivation and the metabolic cost of running for each speed. The length of tibialis anterior activation and muscular coactivation of the biceps femoris-tibialis anterior and gastrocnemius lateralis-tibialis anterior decreased with speed. These results show that longer coactivations of the proximal (rectus femoris-biceps femoris and vastus lateralis-biceps femoris) and leg extensor (rectus femoris-gastrocnemius lateralis) muscles were related to a greater metabolic cost of running, which could be detrimental to performance. The decrease in coactivation in the flexor and distal muscles at faster speeds occurs due to the shorter duration of tibialis anterior activation as speed increases, yet stability may be maintained. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  12. Slow but tenacious: an analysis of running and gripping performance in chameleons.

    PubMed

    Herrel, Anthony; Tolley, Krystal A; Measey, G John; da Silva, Jessica M; Potgieter, Daniel F; Boller, Elodie; Boistel, Renaud; Vanhooydonck, Bieke

    2013-03-15

    Chameleons are highly specialized and mostly arboreal lizards characterized by a suite of derived characters. The grasping feet and tail are thought to be related to the arboreal lifestyle of chameleons, yet specializations for grasping are thought to exhibit a trade-off with running ability. Indeed, previous studies have demonstrated a trade-off between running and clinging performance, with faster species being poorer clingers. Here we investigate the presence of trade-offs by measuring running and grasping performance in four species of chameleon belonging to two different clades (Chamaeleo and Bradypodion). Within each clade we selected a largely terrestrial species and a more arboreal species to test whether morphology and performance are related to habitat use. Our results show that habitat drives the evolution of morphology and performance but that some of these effects are specific to each clade. Terrestrial species in both clades show poorer grasping performance than more arboreal species and have smaller hands. Moreover, hand size best predicts gripping performance, suggesting that habitat use drives the evolution of hand morphology through its effects on performance. Arboreal species also had longer tails and better tail gripping performance. No differences in sprint speed were observed between the two Chamaeleo species. Within Bradypodion, differences in sprint speed were significant after correcting for body size, yet the arboreal species were both better sprinters and had greater clinging strength. These results suggest that previously documented trade-offs may have been caused by differences between clades (i.e. a phylogenetic effect) rather than by design conflicts between running and gripping per se.

  13. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
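
    The Tikhonov-regularized pseudoinverse variant reduces each voxel's reconstruction to one small linear solve. Below is a minimal NumPy sketch of that linear-algebra core; for brevity the forward operator is plain subsampling rather than the actual q-space encoding, and all shapes, names, and the regularization weight are illustrative assumptions.

    ```python
    import numpy as np

    def tikhonov_dictionary_recon(y, S, D, lam=0.1):
        """Reconstruct a pdf p from undersampled measurements y = S @ p.
        D: dictionary whose columns are training pdfs; S: sampling operator.
        Solves min_c ||S D c - y||^2 + lam ||c||^2, then returns D @ c."""
        A = S @ D
        c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        return D @ c

    # Stand-in shapes only: 515 pdf grid points, 200 dictionary atoms,
    # 170 retained samples (roughly R = 3 undersampling).
    rng = np.random.default_rng(0)
    D = rng.random((515, 200))
    S = np.eye(515)[rng.choice(515, size=170, replace=False)]
    p_true = D @ rng.random(200)
    p_hat = tikhonov_dictionary_recon(S @ p_true, S, D)
    ```

    Since the solve depends only on the dictionary and the sampling pattern, the regularized pseudoinverse can be precomputed once and reapplied to every voxel, which is consistent with the reported seconds-per-slice reconstruction times.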

  14. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common challenge in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same problem: its computational cost limits applications to very high resolution modeling of large watersheds. The OpenMP (Open Multi-Processing) parallel programming interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. This parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG when modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
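
    The pattern being parallelized is a loop over HRUs whose per-unit computations are independent within a time step. The paper does this with OpenMP inside SWAT's code; purely as an illustration of the same pattern, here is a hedged Python sketch using a process pool, where simulate_hru and its toy water-balance arithmetic are hypothetical stand-ins.

    ```python
    from multiprocessing import Pool

    def simulate_hru(hru):
        # Hypothetical stand-in for one gridded HRU's daily water-balance
        # step; in SWATG this is the inner loop that the threads share.
        soil_water, rain, et = hru
        runoff = max(0.0, soil_water + rain - et - 100.0)
        return runoff

    if __name__ == "__main__":
        hrus = [(80.0 + i % 40, 12.0, 3.0) for i in range(1_000_000)]
        with Pool(processes=15) as pool:      # mirrors the 15-thread setup
            runoffs = pool.map(simulate_hru, hrus, chunksize=10_000)
        print(sum(runoffs))
    ```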

  15. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.

    PubMed

    Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
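
    A-MUD itself is an STx script, so its internals are not reproduced here; the sketch below is a generic spectrogram-based detector of the same flavor, which thresholds the energy in the ultrasonic band and merges contiguous frames into calls. The band limits, threshold, and minimum duration are assumptions, not A-MUD's parameters.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    def detect_usv(x, fs, band=(30e3, 110e3), thresh_db=15.0, min_dur=0.005):
        """Flag time bins whose in-band energy exceeds the median noise
        floor by thresh_db, then keep runs longer than min_dur seconds.
        Note: fs must be a few hundred kHz for this band to exist."""
        f, t, S = spectrogram(x, fs, nperseg=512, noverlap=384)
        inband = S[(f >= band[0]) & (f <= band[1])].sum(axis=0)
        level_db = 10 * np.log10(inband + 1e-12)
        active = level_db > np.median(level_db) + thresh_db
        calls, start = [], None
        # Collect contiguous active runs as (onset, offset) times.
        for i, a in enumerate(active):
            if a and start is None:
                start = t[i]
            elif not a and start is not None:
                if t[i] - start >= min_dur:
                    calls.append((start, t[i]))
                start = None
        return calls
    ```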

  16. SVR versus neural-fuzzy network controllers for the sagittal balance of a biped robot.

    PubMed

    Ferreira, João P; Crisóstomo, Manuel M; Coimbra, A Paulo

    2009-12-01

    The real-time balance control of an eight-link biped robot using a zero moment point (ZMP) dynamic model is difficult due to the processing time of the corresponding equations. To overcome this limitation, two alternative intelligent computing control techniques were compared: one based on support vector regression (SVR) and another based on a first-order Takagi-Sugeno-Kang (TSK)-type neural-fuzzy (NF) network. Both methods use the ZMP error and its variation as inputs and the output is the correction of the robot's torso necessary for its sagittal balance. The SVR and the NF were trained based on simulation data and their performance was verified with a real biped robot. Two performance indexes are proposed to evaluate and compare the online performance of the two control methods. The ZMP is calculated by reading four force sensors placed under each robot's foot. The gait implemented in this biped is similar to a human gait that was acquired and adapted to the robot's size. Some experiments are presented and the results show that the implemented gait combined either with the SVR controller or with the TSK NF network controller can be used to control this biped robot. The SVR and the NF controllers exhibit similar stability, but the SVR controller runs about 50 times faster.
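
    As a sketch of the SVR side of this design, the snippet below trains a regressor from (ZMP error, ZMP-error variation) to a torso correction. The synthetic training data, the linear toy relation, and the hyperparameters are assumptions standing in for the paper's simulation data.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    # Synthetic stand-in for simulation data: inputs are the ZMP error and
    # its variation; target is the torso correction that rebalances the robot.
    X = rng.uniform(-1, 1, size=(500, 2))      # [zmp_error, zmp_error_delta]
    y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * rng.standard_normal(500)

    controller = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X, y)

    def torso_correction(zmp_error, zmp_error_delta):
        # One inference per control tick; evaluation is a fixed, small sum
        # over support vectors, which is why it can run in real time.
        return controller.predict([[zmp_error, zmp_error_delta]])[0]

    print(torso_correction(0.05, -0.01))
    ```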

  17. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm. PMID:23846466

  18. More Symmetrical Children Have Faster and More Consistent Choice Reaction Times

    ERIC Educational Resources Information Center

    Hope, David; Bates, Timothy C.; Dykiert, Dominika; Der, Geoff; Deary, Ian J.

    2015-01-01

    Greater cognitive ability in childhood is associated with increased longevity, and speedier reaction time (RT) might account for much of this linkage. Greater bodily symmetry is linked to both higher cognitive test scores and faster RTs. It is possible, then, that differences in bodily system integrity indexed by symmetry may underlie the…

  19. Music Enhances Sleep in Preschool Children.

    ERIC Educational Resources Information Center

    Field, Tiffany

    1999-01-01

    Examined the effect of playing background classical guitar music at nap time on alternate days to toddlers and preschool children attending a model preschool. Specifically assessed music's effect on nap-time sleep onset. Found that children fell asleep faster on the music days than on the nonmusic days. Toddlers fell asleep faster than did the…

  20. Leisure-time running reduces all-cause and cardiovascular mortality risk.

    PubMed

    Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N

    2014-08-05

    Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  1. Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk

    PubMed Central

    Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.

    2014-01-01

    Background Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. Objectives We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods Running was assessed on the medical history questionnaire by leisure-time activity. Results During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions Running, even 5-10 minutes per day and at slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581

  2. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  3. A New Monte Carlo Filtering Method for the Diagnosis of Mission-Critical Failures

    NASA Technical Reports Server (NTRS)

    Gay, Gregory; Menzies, Tim; Davies, Misty; Gundy-Burlet, Karen

    2009-01-01

    Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the settings most likely to cause a mission-critical failure. This research benchmarks two treatment learning methods against standard optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. It is shown that these treatment learners are both faster than traditional methods and show demonstrably better results.
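
    Treatment learning can be illustrated with a deliberately simplified, single-attribute version: score every (attribute, value) pair by how much conditioning on it lifts the frequency of the target class. Real treatment learners such as TAR3 search conjunctions of attribute ranges; everything below, including the toy data, is illustrative.

    ```python
    def best_treatment(rows, classes, target="critical_failure"):
        """Score every (attribute index, value) pair by the lift of `target`
        among rows matching it; the highest-lift pair approximates the
        smallest single change that most shifts the class distribution."""
        base = classes.count(target) / len(classes)
        best, best_lift = None, 0.0
        for j in range(len(rows[0])):
            for v in {r[j] for r in rows}:
                sub = [c for r, c in zip(rows, classes) if r[j] == v]
                lift = (sub.count(target) / len(sub)) / base
                if lift > best_lift:
                    best, best_lift = (j, v), lift
        return best, best_lift

    # Toy usage: three config attributes per simulated run, one label each.
    rows = [("a", 1, "x"), ("a", 2, "x"), ("b", 1, "y"), ("b", 2, "y")]
    classes = ["ok", "critical_failure", "critical_failure", "critical_failure"]
    print(best_treatment(rows, classes))
    ```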

  4. Atmospheric imaging results from the Mars Exploration Rovers

    NASA Astrophysics Data System (ADS)

    Lemmon, M.; Athena Science Team

    The Athena science payload of the Spirit and Opportunity Mars Exploration Rovers contains instruments capable of measuring radiometric properties of the Martian atmosphere in the visible and the thermal infrared. Remote sensing instruments include Pancam, a color panoramic camera covering 0.4-1.0 microns, and Mini-TES, a thermal infrared spectrometer covering 5-29 microns. Results from atmospheric imaging by Pancam will be covered here. Visible and near-infrared aerosol opacity is monitored by direct solar imaging. Early results show dust opacity near 1 when both rovers landed. Both Spirit and Opportunity have seen dust opacity fall with time, somewhat faster at Spirit's Gusev crater landing site. Diurnal variations are also being monitored at both sites. There is no direct probe of the dust's vertical distribution, but images of the Sun near the horizon and of the twilight will provide constraints on the dust distribution. Dust optical properties and a cross-section weighted aerosol size will be estimated from Pancam images of the sky at varying geometries and times of day. A series of sky imaging sequences has been run with varying illumination geometry. The observations are similar to those reported for Mars Pathfinder.

  5. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive pairwise comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
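
    A compact way to see the two phases is the sketch below: a reverse BFS from the accepting states yields the backward-depth coarse partition (states that cannot reach an accepting state keep a None depth and form their own block), and a hash of successor-block signatures refines it to a fixed point. This is a simplified reading of the approach, not the paper's implementation; delta is assumed to be a complete transition table (dict of dicts).

    ```python
    from collections import defaultdict, deque

    def minimize_blocks(states, accepting, delta, alphabet):
        """Return a dict mapping each state to its equivalence-block id.
        Phase 1: coarse partition by backward depth (shortest distance to
        an accepting state along reversed edges). Phase 2: refine by hashing
        each state's tuple of successor-block ids until stable."""
        rev = defaultdict(list)
        for s in states:
            for a in alphabet:
                rev[delta[s][a]].append(s)
        depth = {s: None for s in states}
        q = deque()
        for s in accepting:
            depth[s] = 0
            q.append(s)
        while q:                                  # reverse BFS
            u = q.popleft()
            for v in rev[u]:
                if depth[v] is None:
                    depth[v] = depth[u] + 1
                    q.append(v)
        block = dict(depth)                       # coarse partition
        while True:                               # hash-based refinement
            sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
                   for s in states}
            ids = {}
            new_block = {s: ids.setdefault(sig[s], len(ids)) for s in states}
            if len(set(new_block.values())) == len(set(block.values())):
                return new_block
            block = new_block
    ```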

  6. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similarity kernel. First, the integral image is introduced into the traditional NLM algorithm, eliminating a great deal of repetitive computation and greatly improving the running speed of the algorithm. Second, to correct the error introduced by the integral image, we construct a similarity window that resembles the Gaussian kernel via a pyramidal stacking pattern. Finally, to offset the effect of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we construct a 3 × 3 similarity kernel within the neighborhood window, which reduces the influence of noise on any single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of peak signal-to-noise ratio (PSNR, 2.9% higher on average) and perceptual image quality.
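
    The integral-image trick is the heart of the speedup: for a fixed patch offset, the squared pixel differences are summed over every patch with four array lookups instead of a per-patch loop. A minimal NumPy sketch of that step follows; periodic boundaries via np.roll and the patch size are assumptions, and the paper's reconstructed-kernel corrections are omitted.

    ```python
    import numpy as np

    def nlm_offset_distances(img, dx, dy, patch=3):
        """Sum of squared differences between the patch at every pixel and
        the patch at offset (dx, dy), computed for all pixels at once with
        an integral image (O(1) per pixel instead of O(patch^2))."""
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        sq = (img - shifted) ** 2
        # Zero-padded integral image of the squared differences.
        ii = np.pad(sq, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        r = patch // 2
        H, W = img.shape
        d = np.full((H, W), np.inf)
        # Box sum of the (patch x patch) window around each interior pixel.
        d[r:H-r, r:W-r] = (ii[patch:, patch:] - ii[:-patch, patch:]
                           - ii[patch:, :-patch] + ii[:-patch, :-patch])
        return d
    ```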

  7. Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU

    PubMed Central

    Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.

    2013-01-01

    List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
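
    To make the per-event work concrete, here is a toy 2D list-mode MLEM loop: each event contributes one forward projection (a sum along its line) and one back projection (a spread of the ratio along the same line). Nearest-pixel sampling stands in for an exact ray tracer and for the paper's shift-varying resolution kernels; the whole snippet is an illustrative sketch, not the paper's method.

    ```python
    import numpy as np

    def line_pixels(p0, p1, n_samples, shape):
        """Nearest-pixel samples along the line from p0 to p1 (a crude
        stand-in for an exact ray tracer such as Siddon's algorithm)."""
        ts = np.linspace(0.0, 1.0, n_samples)
        pts = np.rint(np.outer(1 - ts, p0) + np.outer(ts, p1)).astype(int)
        pts = pts[(pts[:, 0] >= 0) & (pts[:, 0] < shape[0]) &
                  (pts[:, 1] >= 0) & (pts[:, 1] < shape[1])]
        return np.unique(pts, axis=0)

    def listmode_mlem(events, shape, n_iters=10):
        """events: list of (p0, p1) endpoint pairs, one per coincidence."""
        img = np.ones(shape)
        sens = np.full(shape, 1e-6)   # sensitivity image (crudely, from LORs)
        lors = [line_pixels(p0, p1, 64, shape) for p0, p1 in events]
        for pix in lors:
            sens[pix[:, 0], pix[:, 1]] += 1.0
        for _ in range(n_iters):
            back = np.zeros(shape)
            for pix in lors:
                fwd = img[pix[:, 0], pix[:, 1]].sum()    # forward projection
                back[pix[:, 0], pix[:, 1]] += 1.0 / max(fwd, 1e-12)  # backproject
            img *= back / sens
        return img

    events = [((0.0, 16.0), (31.0, 16.0)), ((16.0, 0.0), (16.0, 31.0))]
    img = listmode_mlem(events, (32, 32))
    ```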

  8. Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.

    PubMed

    Bexelius, Tobias; Sohlberg, Antti

    2018-06-01

    Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a typical 128 × 128 matrix, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an everyday clinical reconstruction tool.

  9. Enceladus: three-act play and current state

    NASA Astrophysics Data System (ADS)

    Luan, J.; Goldreich, P.

    2017-12-01

    Eccentricity (e) growth as Enceladus migrates deeper into mean motion resonance with Dione results in increased tidal heating. As the bottom of the ice shell melts, the rate of tidal heating jumps and runaway melting ensues. At the end of runaway melting, the shell's thickness has fallen below the value at which the frequency of free libration equals the orbital mean motion, and e has damped to well below its current value. Subsequently, both the shell thickness and e partake in a limit cycle. As e damps toward its minimum value, the shell's thickness asymptotically approaches its resonant value from below. After minimum e, the shell thickens quickly and e grows even faster. This cycle is likely to have been repeated multiple times in the past. Currently, e is much smaller than its equilibrium value corresponding to the shell thickness. Physical libration resonance resolves this mystery: it ensures that the low-e and medium-thickness state is present for most of the time between consecutive limit cycles. It is a robust scenario that avoids fine-tuning or extreme parameter choices, and naturally produces episodic stages of high heating, consistent with softening of topographical features on Enceladus.

  10. The effect of an acute ingestion of Turkish coffee on reaction time and time trial performance.

    PubMed

    Church, David D; Hoffman, Jay R; LaMonica, Michael B; Riffe, Joshua J; Hoffman, Mattan W; Baker, Kayla M; Varanoske, Alyssa N; Wells, Adam J; Fukuda, David H; Stout, Jeffrey R

    2015-01-01

    The purpose of this study was to examine the ergogenic benefits of Turkish coffee consumed an hour before exercise. In addition, metabolic, cardiovascular, and subjective measures of energy, focus and alertness were examined in healthy, recreationally active adults who were regular caffeine consumers (>200 mg per day). Twenty males (n = 10) and females (n = 10), age 24.1 ± 2.9 y; height 1.70 ± 0.09 m; body mass 73.0 ± 13.0 kg (mean ± SD), ingested both Turkish coffee [3 mg·kg(-1) BW of caffeine (TC)] and decaffeinated Turkish coffee (DC) in a double-blind, randomized, cross-over design. Performance measures included a 5 km time trial, upper and lower body reaction to visual stimuli, and multiple object tracking. Plasma caffeine concentrations, blood pressure (BP), heart rate and subjective measures of energy, focus and alertness were assessed at baseline (BL), 30-min following coffee ingestion (30+), prior to endurance exercise (PRE) and immediately post-5 km (IP). Metabolic measures [VO2, VE, and respiratory exchange ratio (RER)] were measured during the 5 km. Plasma caffeine concentrations were significantly greater during TC (p < 0.001) at 30+, PRE, and IP compared to DC. Significantly higher energy levels were reported at 30+ and PRE for TC compared to DC. Upper body reaction performance (p = 0.023) and RER (p = 0.019) were significantly higher for TC (85.1 ± 11.6 "hits," and 0.98 ± 0.05, respectively) compared to DC (81.2 ± 13.7 "hits," and 0.96 ± 0.05, respectively). Although no significant differences (p = 0.192) were observed in 5 km run time, 12 of the 20 subjects ran faster (p = 0.012) during TC (1662 ± 252 s) compared to DC (1743 ± 296 s). Systolic BP was significantly elevated during TC in comparison to DC. No other differences (p > 0.05) were noted in any of the other performance or metabolic measures. Acute ingestion of TC resulted in a significant elevation in plasma caffeine concentrations within 30-min of consumption. TC ingestion resulted in significant performance benefits in reaction time and an increase in subjective feelings of energy in habitual caffeine users. No significant differences were noted in time for the 5 km between trials; however, 60 % of the participants performed the 5 km faster during the TC trial and were deemed responders. When comparing TC to DC in responders only, significantly faster times were noted when consuming TC compared to DC. No significant benefits were noted in measures of cognitive function.

  11. Experimental sintering of ash at conduit conditions and implications for the longevity of tuffisites

    NASA Astrophysics Data System (ADS)

    Gardner, James E.; Wadsworth, Fabian B.; Llewellin, Edward W.; Watkins, James M.; Coumans, Jason P.

    2018-03-01

    Escape of gas from magma in the conduit plays a crucial role in mitigating explosivity. Tuffisite veins—ash-filled cracks that form in and around volcanic conduits—represent important gas escape pathways. Sintering of the ash infill decreases its porosity, eventually forming dense glass that is impermeable to gas. We present an experimental investigation of surface tension-driven sintering and associated densification of rhyolitic ash under shallow conduit conditions. Suites of isothermal (700-800 °C) and isobaric H2O pressure (20 and 40 MPa) experiments were run for durations of 5-90 min. Obsidian powders with two different size distributions were used: 1-1600 μm (mean size = 89 μm), and 63-400 μm (mean size = 185 μm). All samples evolved similarly through four textural phases: phase 1—loose and cohesion-less particles; phase 2—particles sintered at contacts and surrounded by fully connected tortuous pore space of up to 40% porosity; phase 3—continuous matrix of partially coalesced particles that contain both isolated spherical vesicles and connected networks of larger, contorted vesicles; phase 4—dense glass with 2-5% fully isolated vesicles that are mainly spherical. Textures evolve faster at higher temperature and higher H2O pressure. Coarse samples sinter more slowly and contain fewer, larger vesicles when fully sintered. We quantify the sintering progress by measuring porosity as a function of experimental run-time, and find an excellent collapse of data when run-time is normalized by the sintering timescale λ_s = ηR̄/σ, where η is melt viscosity, R̄ is mean particle radius, and σ is melt-gas surface tension. Because timescales of diffusive H2O equilibration are generally fast compared to those of sintering, the relevant melt viscosity is calculated from the solubility H2O content at experimental temperature and pressure. We use our results to develop a framework for estimating ash sintering rates under shallow conduit conditions, and predict that sintering of ash to dense glass can seal tuffisites in minutes to hours, depending on pressure (i.e., depth), temperature, and ash size.
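
    As a quick order-of-magnitude check of the timescale λ_s = ηR̄/σ, the snippet below plugs in assumed values for a hydrous rhyolitic melt; all three numbers are illustrative choices, not the paper's measurements.

    ```python
    # Worked example of the sintering timescale lambda_s = eta * R_bar / sigma
    # with assumed (illustrative) values for a hydrous rhyolitic melt.
    eta = 1e7      # Pa*s, assumed viscosity of the hydrous melt at run conditions
    R_bar = 45e-6  # m, assumed mean particle radius (half the ~89 um mean size)
    sigma = 0.3    # N/m, assumed melt-gas surface tension

    lam_s = eta * R_bar / sigma
    print(f"sintering timescale ~ {lam_s:.0f} s (~{lam_s/60:.0f} min)")
    # With these inputs lambda_s is ~1.5e3 s (~25 min), the same order as the
    # 5-90 min experimental run durations.
    ```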

  12. Anthropometric and Performance Measures for High School Basketball Players

    PubMed Central

    Greene, Joseph J.; McGuine, Timothy A.; Leverson, Glen; Best, Thomas M.

    1998-01-01

    Objective: To determine possible anthropometric and performance sex differences in a population of high school basketball players. Design and Setting: Measurements were collected during the first week of basketball practice before the 1995-1996 season. Varsity basketball players from 4 high schools were tested on a battery of measures chosen to detect possible anthropometric and performance sex differences. Subjects: Fifty-four female and sixty-one male subjects, from varsity basketball teams at high schools enrolled in the athletic training outreach program at the University of Wisconsin Hospital Sports Medicine Center in Madison, WI, volunteered to take part in this study. Measurements: We took anthropometric measurements on each of the 115 subjects. These included height, weight, body composition, ankle range of motion, and medial longitudinal arch type in weightbearing. Performance measures included the vertical jump, 22.86-m (25-yd) shuttle run, 18.29-m (20-yd) sprint, and single-limb balance time. Results: We compared anthropometric and performance characteristics using a 2-sample t test. The only exception to this was for medial longitudinal arch type, where the 2 groups were compared using a 2-tailed Fisher's exact test. The male subjects were significantly taller and heavier, while the females had a significantly higher percentage of body fat. There were no significant differences found for ankle plantar flexion and dorsiflexion, but the females had significantly more inversion and eversion range of motion. Analysis of medial longitudinal arch type found females to have a higher percentage of pronated arches and males to have a higher percentage of supinated arches. Performance testing revealed that the males were able to jump significantly higher and run the 22.86-m (25-yard) shuttle run and 18.29-m (20-yard) sprint significantly faster than the female subjects. There was no significant difference between the groups for single-limb balance time. Conclusions: We found significant anthropometric and performance sex differences in a cohort of high school basketball players. Further study of these measures is necessary to determine if these differences can predict the risk for ankle injuries in this particular population. PMID:16558515

  13. 2009 fault tolerance for extreme-scale computing workshop, Albuquerque, NM - March 19-20, 2009.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, D. S.; Daly, J.; DeBardeleben, N.

    2009-02-01

    This is a report on the third in a series of petascale workshops co-sponsored by Blue Waters and TeraGrid to address challenges and opportunities for making effective use of emerging extreme-scale computing. This workshop was held to discuss fault tolerance on large systems for running large, possibly long-running applications. The main point of the workshop was to have systems people, middleware people (including fault-tolerance experts), and applications people talk about the issues and figure out what needs to be done, mostly at the middleware and application levels, to run such applications on the emerging petascale systems, without having faults cause large numbers of application failures. The workshop found that there is considerable interest in fault tolerance, resilience, and reliability of high-performance computing (HPC) systems in general, at all levels of HPC. The only way to recover from faults is through the use of some redundancy, either in space or in time. Redundancy in time, in the form of writing checkpoints to disk and restarting at the most recent checkpoint after a fault that causes an application to crash/halt, is the most common tool used in applications today, but there are questions about how long this can continue to be a good solution as systems and memories grow faster than I/O bandwidth to disk. There is interest in both modifications to this, such as checkpoints to memory, partial checkpoints, and message logging, and alternative ideas, such as in-memory recovery using residues. We believe that systematic exploration of these ideas holds the most promise for the scientific applications community. Fault tolerance has been an issue of discussion in the HPC community for at least the past 10 years, but much like other issues, the community has managed to put off addressing it during this period. There is a growing recognition that as systems continue to grow to petascale and beyond, the field is approaching the point where we don't have any choice but to address this through R&D efforts.

  14. Breaking the Myth That Relay Swimming Is Faster Than Individual Swimming.

    PubMed

    Skorski, Sabrina; Etxebarria, Naroa; Thompson, Kevin G

    2016-04-01

    To investigate if swimming performance is better in a relay race than in the corresponding individual race. The authors analyzed 166 elite male swimmers from 15 nations in the same competition (downloaded from www.swimrankings.net). Of 778 observed races, 144 were Olympic Games performances (2000, 2004, 2012), with the remaining 634 performed in national or international competitions. The races were 100-m (n = 436) and 200-m (n = 342) freestyle events. Relay performance times for the 2nd-4th swimmers were adjusted (+ 0.73 s) to allow for the "flying start." Without any adjustment, mean individual relay performances were significantly faster for the first 50 m and overall time in the 100-m events. Furthermore, the first 100 m of the 200-m relay was significantly faster (P < .001). During relays, swimmers competing in 1st position did not show any difference compared with their corresponding individual performance (P > .16). However, swimmers competing in 2nd-4th relay-team positions demonstrated significantly faster times in the 100-m (P < .001) and first half of the 200-m relays than in their individual events (P < .001, ES: 0.28-1.77). When finishing times for 2nd-4th relay-team positions were adjusted for the flying start, no differences were detected between relay and individual race performance for any event or split time (P > .17). Highly trained swimmers do not swim (or turn) faster in relay events than in their individual races. Relay exchange times account for the difference observed in individual vs relay performance.

  15. Nudging the Arctic Ocean to quantify Arctic sea ice feedbacks

    NASA Astrophysics Data System (ADS)

    Dekker, Evelien; Severijns, Camiel; Bintanja, Richard

    2017-04-01

    It is well-established that the Arctic is warming 2 to 3 times faster than the rest of the planet. One of the great uncertainties in climate research is to what extent sea ice feedbacks amplify this (seasonally varying) Arctic warming. Earlier studies have analyzed existing climate model output using correlations and energy budget considerations in order to quantify sea ice feedbacks through indirect methods. From these analyses it is regularly inferred that sea ice likely plays an important role, but details remain obscure. Here we take a different and more direct approach: we keep the sea ice constant in a sensitivity simulation, using a state-of-the-art climate model (EC-Earth) and applying a technique that has not been attempted before. This experimental technique involves nudging the temperature and salinity of the ocean surface (and possibly some layers below, to maintain the vertical structure and mixing) to a predefined prescribed state. When strongly nudged to existing (seasonally varying) sea surface temperatures and ocean salinity, we force the sea ice to remain in the regions/seasons where it is located in the prescribed state, despite the changing climate. Once we obtain 'fixed' sea ice, we run a future scenario, for instance 2 x CO2, with and without prescribed sea ice; the difference between these runs provides a measure of the extent to which sea ice contributes to Arctic warming, including the seasonal and geographical imprint of the effects.

  16. myPresto/omegagene: a GPU-accelerated molecular dynamics simulator tailored for enhanced conformational sampling methods with a non-Ewald electrostatic scheme.

    PubMed

    Kasahara, Kota; Ma, Benson; Goto, Kota; Dasgupta, Bhaskar; Higo, Junichi; Fukuda, Ikuo; Mashimo, Tadaaki; Akiyama, Yutaka; Nakamura, Haruki

    2016-01-01

    Molecular dynamics (MD) is a promising computational approach to investigate dynamical behavior of molecular systems at the atomic level. Here, we present a new MD simulation engine named "myPresto/omegagene" that is tailored for enhanced conformational sampling methods with a non-Ewald electrostatic potential scheme. Our enhanced conformational sampling methods, e.g., the virtual-system-coupled multi-canonical MD (V-McMD) method, replace a multi-process parallelized run with multiple independent runs to avoid inter-node communication overhead. In addition, adopting the non-Ewald-based zero-multipole summation method (ZMM) makes it possible to eliminate the Fourier space calculations altogether. The combination of these state-of-the-art techniques realizes efficient and accurate calculations of the conformational ensemble at an equilibrium state. To exploit these advantages, myPresto/omegagene is specialized for single-process execution on a graphics processing unit (GPU). We performed benchmark simulations for the 20-mer peptide, Trp-cage, with explicit solvent. One of the most thermodynamically stable conformations generated by the V-McMD simulation is very similar to an experimentally solved native conformation. Furthermore, the computation speed is four times faster than that of our previous simulation engine, myPresto/psygene-G. The new simulator, myPresto/omegagene, is freely available at the following URLs: http://www.protein.osaka-u.ac.jp/rcsfp/pi/omegagene/ and http://presto.protein.osaka-u.ac.jp/myPresto4/.

  17. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, D.; Alfonsi, A.; Talbot, P.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
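
    In this setting a surrogate is simply a cheap regressor fitted to a modest number of expensive code runs. The sketch below uses a Gaussian-process regressor, one common choice for simulation surrogates; expensive_simulation and every number in it are hypothetical stand-ins for a real thermal-hydraulic code.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(x):
        # Hypothetical stand-in for a thermal-hydraulic code run (hours each).
        return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

    rng = np.random.default_rng(42)
    X_train = rng.uniform(-1, 1, size=(30, 2))   # 30 affordable code runs
    y_train = np.array([expensive_simulation(x) for x in X_train])

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                         normalize_y=True).fit(X_train, y_train)

    # The surrogate now answers in microseconds; the predictive std flags
    # regions of the input space where more real simulation runs are needed.
    X_query = rng.uniform(-1, 1, size=(5, 2))
    mean, std = surrogate.predict(X_query, return_std=True)
    ```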

  18. Supply of genetic information--amount, format, and frequency.

    PubMed

    Misztal, I; Lawlor, T J

    1999-05-01

    The volume and complexity of genetic information is increasing because of new traits and better models. New traits may include reproduction, health, and carcass. More comprehensive models include the test day model in dairy cattle or a growth model in beef cattle. More complex models, which may include nonadditive effects such as inbreeding and dominance, also provide additional information. The amount of information per animal may increase drastically if DNA marker typing becomes routine and quantitative trait loci information is utilized. In many industries, evaluations are run more frequently. They result in faster genetic progress and improved management and marketing opportunities but also in extra costs and information overload. Adopting new technology and making some organizational changes can help realize all the added benefits of the improvements to the genetic evaluation systems at an acceptable cost. Continuous genetic evaluation, in which new records are accepted and breeding values are updated continuously, will relieve time pressures. An online mating system with access to both genetic and marketing information can result in mating recommendations customized for each user. Such a system could utilize inbreeding and dominance information that cannot efficiently be accommodated in the current sire summaries or off-line mating programs. The new systems will require a new organizational approach in which the task of scientists and technicians will not be simply running the evaluations but also providing the research, design, supervision, and maintenance required in the entire system of evaluation, decision making, and distribution.

  19. Better ILP models for haplotype assembly.

    PubMed

    Etemadi, Maryam; Bagherian, Mehri; Chen, Zhi-Zhong; Wang, Lusheng

    2018-02-19

    The haplotype assembly problem for a diploid genome is to find a pair of haplotypes from a given set of aligned Single Nucleotide Polymorphism (SNP) fragments (reads). It has many applications in association studies, drug design, and genetic research. Since this problem is computationally hard, both heuristic and exact algorithms have been designed for it. Although exact algorithms are much slower, they are still of great interest because they usually output significantly better solutions than heuristic algorithms in terms of popular measures such as the Minimum Error Correction (MEC) score, the number of switch errors, and the QAN50 score. Exact algorithms are also valuable because they can be used to witness how good a heuristic algorithm is. The best known exact algorithm is based on integer linear programming (ILP), and it is known that ILP can also be used to improve the output quality of every heuristic algorithm with a little decline in speed. Therefore, faster ILP models for the problem are highly demanded. As in previous studies, we consider not only the general case of the problem but also its all-heterozygous case, where we assume that if a column of the input read matrix contains at least one 0 and one 1, then it corresponds to a heterozygous SNP site. For both cases, we design new ILP models for the haplotype assembly problem which aim at minimizing the MEC score. The new models are theoretically better because they contain significantly fewer constraints. More importantly, our experimental results show that for both simulated and real datasets, the new model for the all-heterozygous (respectively, general) case can usually be solved via CPLEX (an ILP solver) at least 5 times (respectively, twice) faster than the previous bests. Indeed, the running time can sometimes be 41 times shorter. This paper proposes a new ILP model for the haplotype assembly problem and its all-heterozygous case, respectively. Experiments with both real and simulated datasets show that the new models can be solved within much shorter time by CPLEX than the previous bests. We believe that the models can be used to improve heuristic algorithms as well.
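
    The paper's improved models are not reproduced here, but the flavor of an MEC ILP can be conveyed with a deliberately small sketch: binary haplotype alleles, a binary assignment of each read to one of the two haplotypes, and one error indicator per covered site, linearized so that only mismatches against the chosen haplotype are counted. Everything below (the toy data, variable names, and solver choice) is illustrative; reducing the number of such constraints is exactly where the paper's new models gain their speed.

    ```python
    import pulp

    # Tiny read matrix: rows = reads, None = site not covered by the read.
    reads = [[0, 0, None],
             [0, 1, None],
             [None, 1, 1],
             [1, None, 0]]
    n_sites = 3

    prob = pulp.LpProblem("MEC", pulp.LpMinimize)
    h = {(k, j): pulp.LpVariable(f"h{k}_{j}", cat="Binary")
         for k in (1, 2) for j in range(n_sites)}       # the two haplotypes
    y = {i: pulp.LpVariable(f"y{i}", cat="Binary") for i in range(len(reads))}
    e = {}                                              # error indicators

    for i, read in enumerate(reads):
        for j, r in enumerate(read):
            if r is None:
                continue
            e[i, j] = pulp.LpVariable(f"e{i}_{j}", cat="Binary")
            m1 = h[1, j] if r == 0 else 1 - h[1, j]  # mismatch vs haplotype 1
            m2 = h[2, j] if r == 0 else 1 - h[2, j]  # mismatch vs haplotype 2
            prob += e[i, j] >= m1 - (1 - y[i])       # counts when read i -> hap 1
            prob += e[i, j] >= m2 - y[i]             # counts when read i -> hap 2

    prob += pulp.lpSum(e.values())                   # minimize the MEC score
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print("MEC =", int(pulp.value(prob.objective)))
    ```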

  20. Memetic algorithms for de novo motif-finding in biomedical sequences.

    PubMed

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design so as to not only efficiently explore the multiple sequence local alignment space, but also effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulation, MaMotif is the most time-efficient algorithm compared with the others; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance when compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
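
    A minimal memetic motif finder can be sketched as a genetic algorithm over per-sequence start offsets whose offspring are polished by a hill-climbing local search, with a simple majority-consensus score in place of a full information-content model. This is a generic illustration of the memetic scheme, not MaMotif itself, and every parameter below is an assumption.

    ```python
    import random

    def score(seqs, offsets, w):
        """Sum over motif columns of the count of the majority base."""
        total = 0
        for j in range(w):
            col = [s[o + j] for s, o in zip(seqs, offsets)]
            total += max(col.count(b) for b in "ACGT")
        return total

    def local_search(seqs, ind, w):
        """Hill-climb: try shifting each sequence's offset by +/-1, keep gains."""
        for i, s in enumerate(seqs):
            for d in (-1, 1):
                cand = ind[:]
                cand[i] = min(max(ind[i] + d, 0), len(s) - w)
                if score(seqs, cand, w) > score(seqs, ind, w):
                    ind = cand
        return ind

    def memetic_motif(seqs, w=8, pop=40, gens=60, rng=random.Random(0)):
        P = [[rng.randrange(len(s) - w + 1) for s in seqs] for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=lambda ind: -score(seqs, ind, w))
            elite = P[: pop // 2]
            children = []
            for _ in range(pop - len(elite)):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(len(seqs))           # one-point crossover
                child = a[:cut] + b[cut:]
                i = rng.randrange(len(seqs))             # mutate one offset
                child[i] = rng.randrange(len(seqs[i]) - w + 1)
                children.append(local_search(seqs, child, w))  # memetic step
            P = elite + children
        return max(P, key=lambda ind: score(seqs, ind, w))
    ```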
