NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro; Hut, Piet
2006-12-01
The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N² type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops, for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combinations of N log N codes, single-precision implementations, and performance on other microprocessors.
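As a rough illustration of the kernel being tuned, the sketch below evaluates the pairwise accelerations and jerks that a fourth-order Hermite integrator needs at every step. It is a plain NumPy version written for this summary, not the authors' hand-optimized x86_64 assembly; the variable names and softening parameter are illustrative.

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=0.0):
    """Direct O(N^2) accelerations and jerks for a 4th-order Hermite scheme.

    pos, vel : (N, 3) arrays; mass : (N,) array; eps2 : softening squared.
    This plain NumPy loop illustrates the pairwise kernel the paper speeds up;
    it is not the authors' code.
    """
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):
        dr = pos - pos[i]                     # r_j - r_i for all j
        dv = vel - vel[i]
        r2 = (dr * dr).sum(axis=1) + eps2
        r2[i] = 1.0                           # placeholder to avoid divide-by-zero
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # exclude the self-interaction
        rv = (dr * dv).sum(axis=1) / r2
        acc[i] = (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
        jerk[i] = (mass[:, None] * (dv - 3.0 * rv[:, None] * dr)
                   * inv_r3[:, None]).sum(axis=0)
    return acc, jerk
```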
RAY-RAMSES: a code for ray tracing on the fly in N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barreira, Alexandre; Llinares, Claudio; Bose, Sownak
2016-05-01
We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies as well as modified gravity. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
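A minimal sketch of the cell-by-cell accumulation idea for a convergence map is given below. The lensing kernel, array layout, and function names are illustrative assumptions made for this summary; they are not RAY-RAMSES routines.

```python
import numpy as np

def lensing_kernel(chi, chi_source):
    """Lensing efficiency for a single source plane (constant prefactors omitted)."""
    return chi * (chi_source - chi) / chi_source

def accumulate_convergence(kappa_map, ray_pixels, chi_cells, dchi_cells,
                           delta_cells, chi_source):
    """Add the contribution of the cells each ray crosses at this step.

    ray_pixels  : (M,) map pixel index of each ray
    chi_cells   : (M,) comoving distance of the cell the ray is crossing
    dchi_cells  : (M,) path length of the ray through that cell
    delta_cells : (M,) matter overdensity interpolated to the cell
    """
    w = lensing_kernel(chi_cells, chi_source)
    np.add.at(kappa_map, ray_pixels, w * delta_cells * dchi_cells)
    return kappa_map
```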
OBERON: OBliquity and Energy balance Run on N-body systems
NASA Astrophysics Data System (ADS)
Forgan, Duncan H.
2016-08-01
OBERON (OBliquity and Energy balance Run on N-body systems) models the climate of Earthlike planets under the effects of an arbitrary number and arrangement of other bodies, such as stars, planets and moons. The code, written in C++, simultaneously computes N-body motions using a 4th-order Hermite integrator, simulates climates using a 1D latitudinal energy balance model, and evolves the orbital spin of bodies using the equations of Laskar (1986a,b).
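The energy-balance side of such a model can be illustrated with one explicit update of a standard 1D latitudinal EBM, sketched below. The diffusion coefficient, heat capacity, and linearized infrared cooling (A + BT) are textbook-style placeholder values, not OBERON's actual parameters.

```python
import numpy as np

def ebm_step(T, x, dt, S, albedo, D=0.5, C=2.1e8, A=203.3, B=2.09):
    """One explicit step of C dT/dt = d/dx[D(1-x^2) dT/dx] + S(1-albedo) - (A + B T).

    T      : temperature (deg C) on a grid of x = sin(latitude)
    S      : insolation (W m^-2) per latitude band; albedo : same shape
    A, B   : linearized infrared cooling; C : column heat capacity (J m^-2 K^-1)
    All coefficient values are illustrative, not OBERON's.
    """
    dx = x[1] - x[0]
    # diffusive flux at cell interfaces, with no-flux conditions at the poles
    flux = D * (1.0 - 0.25 * (x[1:] + x[:-1]) ** 2) * np.diff(T) / dx
    div = np.zeros_like(T)
    div[1:-1] = np.diff(flux) / dx
    div[0] = flux[0] / dx
    div[-1] = -flux[-1] / dx
    dTdt = (div + S * (1.0 - albedo) - (A + B * T)) / C
    return T + dt * dTdt
```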
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M⊙/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M⊙/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
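The essence of the method can be sketched as a leapfrog step for the residual displacement about the Zel'dovich (1LPT) trajectory, as below. Cosmological prefactors and the modified COLA time-stepping operators are omitted, and the callable names are placeholders; this is a schematic of the idea, not the authors' discretization.

```python
def cola_kdk_step(q, dx, v, psi1, t, dt, growth, growth_dd, force):
    """One kick-drift-kick step for the residual displacement dx (schematic).

    The full position is x(t) = q + growth(t) * psi1 + dx, and dx obeys
        d^2 dx / dt^2 = F(x) - growth_dd(t) * psi1,
    i.e. the analytically known LPT acceleration is subtracted from the full
    force, so dx only carries the small-scale residual motion.

    q        : (N, 3) Lagrangian (grid) positions
    psi1     : (N, 3) initial Zel'dovich displacement field
    growth   : callable t -> linear growth factor D(t)
    growth_dd: callable t -> second time derivative of D(t)
    force    : callable x -> gravitational acceleration at positions x
    """
    x = q + growth(t) * psi1 + dx
    v = v + 0.5 * dt * (force(x) - growth_dd(t) * psi1)   # half kick
    dx = dx + dt * v                                      # drift
    t = t + dt
    x = q + growth(t) * psi1 + dx
    v = v + 0.5 * dt * (force(x) - growth_dd(t) * psi1)   # half kick
    return dx, v, t
```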
Statistical Analysis of CFD Solutions from 2nd Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Hemsch, M. J.; Morrison, J. H.
2004-01-01
In June 2001, the first AIAA Drag Prediction Workshop was held to evaluate results obtained from extensive N-Version testing of a series of RANS CFD codes. The geometry used for the computations was the DLR-F4 wing-body combination which resembles a medium-range subsonic transport. The cases reported include the design cruise point, drag polars at eight Mach numbers, and drag rise at three values of lift. Although comparisons of the code-to-code medians with available experimental data were similar to those obtained in previous studies, the code-to-code scatter was more than an order-of-magnitude larger than expected and far larger than desired for design and for experimental validation. The second Drag Prediction Workshop was held in June 2003 with emphasis on the determination of installed pylon-nacelle drag increments and on grid refinement studies. The geometry used was the DLR-F6 wing-body-pylon-nacelle combination for which the design cruise point and the cases run were similar to the first workshop except for additional runs on coarse and fine grids to complement the runs on medium grids. The code-to-code scatter was significantly reduced for the wing-body configuration compared to the first workshop, although still much larger than desired. However, the grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement.
NASA Astrophysics Data System (ADS)
Ševeček, P.; Brož, M.; Nesvorný, D.; Enke, B.; Durda, D.; Walsh, K.; Richardson, D. C.
2017-11-01
We report on our study of asteroidal breakups, i.e. fragmentations of targets, subsequent gravitational reaccumulation and formation of small asteroid families. We focused on parent bodies with diameters Dpb = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code combined with an efficient N-body integrator. We assumed various projectile sizes, impact velocities and impact angles (125 runs in total). Resulting size-frequency distributions are significantly different from scaled-down simulations with Dpb = 100 km targets (Durda et al., 2007). We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions for N-body simulations of small asteroid families. Finally, we discuss a number of uncertainties related to SPH simulations.
Code C# for chaos analysis of relativistic many-body systems
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Felea, D.; Stan, E.; Esanu, T.
2010-08-01
This work presents a new Microsoft Visual C# .NET code library, conceived as a general object oriented solution for chaos analysis of three-dimensional, relativistic many-body systems. In this context, we implemented the Lyapunov exponent and the “fragmentation level” (defined using graph theory and the Shannon entropy). Inspired by existing studies on billiard nuclear models and clusters of galaxies, we tried to apply the virial theorem for a simplified many-body system composed of nucleons. A possible application of the “virial coefficient” to the stability analysis of chaotic systems is also discussed. Catalogue identifier: AEGH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30 053 No. of bytes in distributed program, including test data, etc.: 801 258 Distribution format: tar.gz Programming language: Visual C# .NET 2005 Computer: PC Operating system: .Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread RAM: 128 Megabytes Classification: 6.2, 6.5 External routines: .Net Framework 2.0 Library Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, and energy conservation precision test. Additional comments: Easy copy/paste based deployment method. Running time: Quadratic complexity.
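A common way to obtain the largest Lyapunov exponent is the two-trajectory (Benettin-style) renormalization method sketched below in Python; the library itself is C# and its exact scheme may differ. The `step` callback stands for one step of the user's integrator (e.g. the second-order Runge-Kutta scheme mentioned in the summary).

```python
import numpy as np

def largest_lyapunov(step, x0, d0=1e-8, n_steps=10000, dt=1e-3, rng=None):
    """Benettin-style estimate of the largest Lyapunov exponent.

    step(x, dt) -> x' must advance the state vector by one time step.
    d0 is the (small) separation maintained between the two trajectories.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    pert = rng.normal(size=x.shape)
    y = x + d0 * pert / np.linalg.norm(pert)   # shadow trajectory at distance d0
    log_sum = 0.0
    for _ in range(n_steps):
        x = step(x, dt)
        y = step(y, dt)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)             # renormalize the separation
    return log_sum / (n_steps * dt)
```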
Adapting NBODY4 with a GRAPE-6a Supercomputer for Web Access, Using NBodyLab
NASA Astrophysics Data System (ADS)
Johnson, V.; Aarseth, S.
2006-07-01
A demonstration site has been developed by the authors that enables researchers and students to experiment with the capabilities and performance of NBODY4 running on a GRAPE-6a over the web. NBODY4 is a sophisticated open-source N-body code for high accuracy simulations of dense stellar systems (Aarseth 2003). In 2004, NBODY4 was successfully tested with a GRAPE-6a, yielding an unprecedented low-cost tool for astrophysical research. The GRAPE-6a is a supercomputer card developed by astrophysicists to accelerate high accuracy N-body simulations with a cluster or a desktop PC (Fukushige et al. 2005, Makino & Taiji 1998). The GRAPE-6a card became commercially available in 2004, runs at 125 Gflops peak, has a standard PCI interface and costs less than $10,000. Researchers running the widely used NBODY6 (which does not require GRAPE hardware) can compare their own PC or laptop performance with simulations run on http://www.NbodyLab.org. Such comparisons may help justify acquisition of a GRAPE-6a. For workgroups such as university physics or astronomy departments, the demonstration site may be replicated or serve as a model for a shared computing resource. The site was constructed using an NBodyLab server-side framework.
Bulk Chemical and Hf/W Isotopic Consequences of Lossy Accretion
NASA Astrophysics Data System (ADS)
Dwyer, C. A.; Nimmo, F.; Chambers, J.
2013-12-01
The late stages of planetary accretion involve stochastic, large collisions [1]. Many of these collisions likely resulted in hit-and-run events [2] or erosion of existing bodies' crusts [3] or mantles [4]. Here we present a preliminary investigation into the effects of lossy late-stage accretion on the bulk chemistry and isotopic characteristics of the resulting planets. Our model is composed of two parts: (1) an N-body accretion code [5] tracks the orbital and collisional evolution of the terrestrial bodies, including hit-and-run and fragmentation events; (2) post-processing evolves the chemistry in light of radioactive decay and impact-related mixing and partial equilibration. Sixteen runs were performed using the MERCURY N-body code [5]; each run contained Jupiter and Saturn in their current orbits as well as approximately 150 initial bodies. Different collisional outcomes including fragmentation are possible depending on the velocity, angle, mass ratio, and total mass of the impact (modified from [6, 7]). The masses of the core and mantle of each body are tracked throughout the simulation. All bodies are assigned an initial mantle mass fraction, y, of 0.7. We track the Hf and W evolution of these bodies. Radioactive decay occurs between impacts. We calculate the effect of an impact by assuming an idealized model of mixing and partial equilibration [8]. The core equilibration factor is a free parameter; we use 0.4. Partition coefficients are assumed constant. Diversity increases as final mass decreases. The range in final y changes from 0.66-0.72 for approximately Earth-mass planets to 0.41-1 for the smallest bodies in the simulation. The scatter in tungsten anomaly increases from 0.79-4.0 for approximately Earth-mass to 0.11-18 for the smallest masses. This behavior is similar to that observed in our solar system in terms of both bulk and isotopic chemistry. There is no single impact event which defines the final state of the body; therefore, talking about a single, specific age of formation does not make sense. Instead, it must be recognized that terrestrial planet formation occurs over a range of time spanning many tens to perhaps hundreds of millions of years. We are currently performing sensitivity analyses to determine the effect on the tungsten isotopic anomalies of the final bodies. [1] Agnor et al. (1999), Icarus 142, 219-237. [2] Asphaug et al. (2006), Nature 439, 155-160. [3] O'Neill & Palme (2008), Phil Trans R Soc Lond A 366, 4205-4238. [4] Benz et al. (2007), Space Sci Rev 132, 189-202. [5] Chambers (2013), Icarus 224, 43-56. [6] Genda et al. (2012), ApJ 744, 137. [7] Leinhardt & Stewart (2012), ApJ 745, 79. [8] Nimmo et al. (2010), EPSL 292, 363-370.
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphical Processing Units (GPU) with NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree-construction and calculation of multipole moments is carried out on the host CPU, while the force calculation which consists of tree walks and evaluation of interaction list is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s is achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree-construction and shows a performance improvement of more than a factor 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
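The opening-angle test that drives such a tree walk can be sketched with a monopole-only, recursive CPU version, as below. The node attribute names are assumptions made for this sketch, leaves are assumed to hold a single body, and Octgrav's GPU interaction-list construction is considerably more involved.

```python
import numpy as np

def tree_accel(node, pos_i, theta=0.5, eps2=1e-6):
    """Monopole-only Barnes-Hut acceleration on one particle (G = 1).

    node is assumed to expose: .size (cell side length), .com (center of mass),
    .mass (total mass), and .children (list of sub-cells, empty for a leaf).
    """
    dr = node.com - pos_i
    d2 = np.dot(dr, dr) + eps2
    # accept the monopole if the cell is a leaf or subtends a small enough angle
    if not node.children or node.size * node.size < theta * theta * d2:
        return node.mass * dr / d2 ** 1.5
    acc = np.zeros(3)
    for child in node.children:
        if child.mass > 0.0:
            acc += tree_accel(child, pos_i, theta, eps2)
    return acc
```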
A new paradigm for reproducing and analyzing N-body simulations of planetary systems
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2017-05-01
The reproducibility of experiments is one of the main principles of the scientific method. However, numerical N-body experiments, especially those of planetary systems, are currently not reproducible. In the most optimistic scenario, they can only be replicated in an approximate or statistical sense. Even if authors share their full source code and initial conditions, differences in compilers, libraries, operating systems or hardware often lead to qualitatively different results. We provide a new set of easy-to-use, open-source tools that address the above issues, allowing for exact (bit-by-bit) reproducibility of N-body experiments. In addition to generating completely reproducible integrations, we show that our framework also offers novel and innovative ways to analyse these simulations. As an example, we present a high-accuracy integration of the Solar system spanning 10 Gyr, requiring several weeks to run on a modern CPU. In our framework, we can easily access simulation data not only at the predefined intervals for which we save snapshots, but also at any time during the integration. We achieve this by integrating an on-demand reconstructed simulation forward in time from the nearest snapshot. This allows us to extract arbitrary quantities at any point in the saved simulation exactly (bit-by-bit), and within seconds rather than weeks. We believe that the tools we present in this paper offer a new paradigm for how N-body simulations are run, analysed and shared across the community.
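The on-demand reconstruction idea reduces to the lookup-and-reintegrate step sketched below. The function names are placeholders for this summary; the authors' released tools wrap equivalent logic around their bit-reproducible integrator.

```python
import bisect

def state_at(snapshots, times, t_query, integrate_to):
    """Return the simulation state at an arbitrary time t_query.

    snapshots    : list of saved states, sorted by time
    times        : list of the corresponding snapshot times
    integrate_to : callable (state, t) -> state advanced to time t, using
                   exactly the same integrator settings as the original run,
                   so the reconstruction matches the original trajectory
                   bit-by-bit.
    """
    i = bisect.bisect_right(times, t_query) - 1   # nearest snapshot at or before t_query
    if i < 0:
        raise ValueError("t_query precedes the first snapshot")
    return integrate_to(snapshots[i], t_query)
```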
Non-linear structure formation in the `Running FLRW' cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2016-07-01
We present a suite of cosmological N-body simulations describing the `Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Lambda cold dark matter (ΛCDM) with a time-evolving vacuum density, Λ(z), and a time-evolving Newtonian gravitational coupling, G(z). In this paper, we review the model and introduce the necessary analytical treatment needed to adapt a reference N-body code. Our resulting simulations represent the first realization of the full growth history of structure in the R-FLRW cosmology into the non-linear regime, and our normalization choice makes them fully consistent with the latest cosmic microwave background data. The post-processing data products also allow, for the first time, an analysis of the properties of the halo and sub-halo populations. We explore the degeneracies of many statistical observables and discuss the steps needed to break them. Furthermore, we provide a quantitative description of the deviations of R-FLRW from ΛCDM, which could be readily exploited by future cosmological observations to test and further constrain the model.
Pantelić, S; Kostić, R; Trajković, N; Sporiš, G
2015-01-01
The aims of this study were: 1) to determine the effects of a 12-week recreational soccer training programme and continuous endurance running on body composition of young adult men and 2) to determine which of these two programmes was more effective concerning body composition. Sixty-four participants completed the randomized controlled trial and were randomly assigned to one of three groups: a soccer training group (SOC; n=20), a running group (RUN; n=21) or a control group performing no physical training (CON; n=23). Training programmes for SOC and RUN lasted 12 weeks, with 3 training sessions per week. Soccer sessions consisted of 60 min ordinary five-a-side, six-a-side or seven-a-side matches on a 30-45 m wide and 45-60 m long plastic grass pitch. Running sessions consisted of 60 min of continuous moderate intensity running at the same average heart rate as in SOC (~80% HRmax). All participants, regardless of group assignment, were tested for each of the following dependent variables: body weight, body height, body mass index, percent body fat, body fat mass, fat-free mass and total body water. In the SOC and RUN groups there was a significant decrease (p < 0.05) in body composition parameters from pre- to post-training values for all measures with the exception of fat-free mass and total body water. Body mass index, percent body fat and body fat mass did not differ between groups at baseline, but by week 12 were significantly lower (p < 0.05) in the SOC and RUN groups compared to CON. To conclude, recreational soccer training provides at least the same changes in body composition parameters as continuous running in young adult men when the training intensity is well matched. PMID:26681832
Aeroelastic Tailoring Study of N+2 Low-Boom Supersonic Commercial Transport Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2015-01-01
The Lockheed Martin N+2 Low-boom Supersonic Commercial Transport (LSCT) aircraft is optimized in this study through the use of a multidisciplinary design optimization tool developed at the NASA Armstrong Flight Research Center. A total of 111 design variables are used in the first optimization run. Total structural weight is the objective function in this optimization run. Design requirements for strength, buckling, and flutter are selected as constraint functions during the first optimization run. The MSC Nastran code is used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses are based on the ZAERO code, and landing and ground control loads are computed using an in-house code.
Gastin, Paul B; Tangalos, Christie; Torres, Lorena; Robertson, Sam
2017-12-01
This study investigated age-related differences in maturity, physical and functional characteristics and playing performance in youth Australian Football (AF). Young male players (n = 156) were recruited from 12 teams across 6 age groups (U10-U15) of a recreational AF club. All players were tested for body size, maturity and fitness. Player performance was assessed during a match in which disposals (kicks and handballs) and their effectiveness were coded from a video recording and match running performance measured using Global Positioning System. Significant main effects (P < 0.01) for age group were observed for age, years to peak height velocity, body mass, height, 20 m sprint, maximal speed over 20 m, vertical jump, 20 m multistage shuttle run, match distance, high-speed running distance, peak speed, number of effective disposals and percentage of effective disposals. Age-related differences in fitness characteristics (speed, lower body power and endurance) appeared to transfer to match running performance. The frequency in which players disposed of the football did not differ between age groups, however the effectiveness of each disposal (i.e., % effective disposals) improved with age. Match statistics, particularly those that evaluate skill execution outcome (i.e., effectiveness), are useful to assess performance and to track player development over time. Differences between age groups, and probably variability within age groups, are strongly associated with chronological age and maturity.
ICE-COLA: towards fast and accurate synthetic galaxy catalogues optimizing a quasi-N-body method
NASA Astrophysics Data System (ADS)
Izard, Albert; Crocce, Martin; Fosalba, Pablo
2016-07-01
Next generation galaxy surveys demand the development of massive ensembles of galaxy mocks to model the observables and their covariances, which is computationally prohibitive using N-body simulations. COmoving Lagrangian Acceleration (COLA) is a novel method designed to make this feasible by following an approximate dynamics but with up to three orders of magnitude speed-ups when compared to an exact N-body. In this paper, we investigate the optimization of the code parameters in the compromise between computational cost and recovered accuracy in observables such as two-point clustering and halo abundance. We benchmark those observables with a state-of-the-art N-body run, the MICE Grand Challenge simulation. We find that using 40 time-steps linearly spaced since z_I ≈ 20, and a force mesh resolution three times finer than that of the number of particles, yields a matter power spectrum within 1 per cent for k ≲ 1 h Mpc^-1 and a halo mass function within 5 per cent of those in the N-body. In turn, the halo bias is accurate within 2 per cent for k ≲ 0.7 h Mpc^-1 whereas, in redshift space, the halo monopole and quadrupole are within 4 per cent for k ≲ 0.4 h Mpc^-1. These results hold for a broad range in redshift (0 < z < 1) and for all halo mass bins investigated (M > 10^12.5 h^-1 M⊙). To bring accuracy in clustering to the one per cent level we study various methods that re-calibrate halo masses and/or velocities. We thus propose an optimized choice of COLA code parameters as a powerful tool to optimally exploit future galaxy surveys.
Parallel implementation of an adaptive and parameter-free N-body integrator
NASA Astrophysics Data System (ADS)
Pruett, C. David; Ingham, William H.; Herman, Ralph D.
2011-05-01
Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary Program title: PNB.f90 Catalogue identifier: AEIK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3052 No. of bytes in distributed program, including test data, etc.: 68 600 Distribution format: tar.gz Programming language: Fortran 90 and OpenMPI Computer: All shared or distributed memory parallel processors Operating system: Unix/Linux Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: Dependent upon N Classification: 4.3, 4.12, 6.5 Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
Aeroelastic Tailoring Study of N+2 Low Boom Supersonic Commercial Transport Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
The Lockheed Martin N+2 Low-boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. The second optimization run was prepared and based on the big-bang big-crunch algorithm. Six composite ply angles for the second and fourth composite layers were selected as discrete design variables for the second optimization run. Composite ply angle changes could not improve the weight configuration of the N+2 LSCT aircraft. However, this second optimization run created more tolerance for the active and near-active strength constraint values for future weight optimization runs.
A Three-Dimensional Unsteady CFD Model of Compressor Stability
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
2006-01-01
A three-dimensional unsteady CFD code called CSTALL has been developed and used to investigate compressor stability. The code solved the Euler equations through the entire annulus and all blade rows. Blade row turning, losses, and deviation were modeled using body force terms which required input data at stations between blade rows. The input data was calculated using a separate Navier-Stokes turbomachinery analysis code run at one operating point near stall, and was scaled to other operating points using overall characteristic maps. No information about the stalled characteristic was used. CSTALL was run in a 2-D throughflow mode for very fast calculations of operating maps and estimation of stall points. Calculated pressure ratio characteristics for NASA stage 35 agreed well with experimental data, and results with inlet radial distortion showed the expected loss of range. CSTALL was also run in a 3-D mode to investigate inlet circumferential distortion. Calculated operating maps for stage 35 with 120 degree distortion screens showed a loss in range and pressure rise. Unsteady calculations showed rotating stall with two part-span stall cells. The paper describes the body force formulation in detail, examines the computed results, and concludes with observations about the code.
NASA Astrophysics Data System (ADS)
Beraldo e Silva, Leandro; de Siqueira Pedra, Walter; Sodré, Laerte; Perico, Eder L. D.; Lima, Marcos
2017-09-01
The collapse of a collisionless self-gravitating system, with the fast achievement of a quasi-stationary state, is driven by violent relaxation, with a typical particle interacting with the time-changing collective potential. It is traditionally assumed that this evolution is governed by the Vlasov-Poisson equation, in which case entropy must be conserved. We run N-body simulations of isolated self-gravitating systems, using three simulation codes, NBODY-6 (direct summation without softening), NBODY-2 (direct summation with softening), and GADGET-2 (tree code with softening), for different numbers of particles and initial conditions. At each snapshot, we estimate the Shannon entropy of the distribution function with three different techniques: Kernel, Nearest Neighbor, and EnBiD. For all simulation codes and estimators, the entropy evolution converges to the same limit as N increases. During violent relaxation, the entropy has a fast increase followed by damping oscillations, indicating that violent relaxation must be described by a kinetic equation other than the Vlasov-Poisson equation, even for N as large as that of astronomical structures. This indicates that violent relaxation cannot be described by a time-reversible equation, shedding some light on the so-called “fundamental paradox of stellar dynamics.” The long-term evolution is well-described by the orbit-averaged Fokker-Planck model, with Coulomb logarithm values in the expected range 10-12. By means of NBODY-2, we also study the dependence of the two-body relaxation timescale on the softening length. The approach presented in the current work can potentially provide a general method for testing any kinetic equation intended to describe the macroscopic evolution of N-body systems.
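One standard nearest-neighbour estimator of the Shannon entropy of a sampled distribution function is the Kozachenko-Leonenko form sketched below; it is given here only to illustrate the idea and differs in detail from the Kernel and EnBiD estimators used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln, digamma

def entropy_knn(samples):
    """Kozachenko-Leonenko estimate of the differential entropy (in nats).

    samples : (N, d) array of phase-space points drawn from f(x, v).
    """
    n, d = samples.shape
    tree = cKDTree(samples)
    # distance to the nearest *other* sample (k=2 because k=1 is the point itself)
    dist, _ = tree.query(samples, k=2)
    eps = dist[:, 1]
    log_cd = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)  # log volume of the unit d-ball
    return digamma(n) - digamma(1) + log_cd + d * np.mean(np.log(eps))
```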
Multitasking for flows about multiple body configurations using the chimera grid scheme
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Morgan, R. L.
1987-01-01
The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh approach, a multiple body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two dimensional computations are run on parallel processors on a CRAY-X/MP 48, usually with one mesh per processor. Flow field results are compared with single processor results to demonstrate the feasibility of running multiple mesh codes on parallel processors and to show the increase in efficiency.
NASA Astrophysics Data System (ADS)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
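When the Leapfrog module is selected, the basic update is the familiar kick-drift-kick step; a single, global time-step sketch is shown below. The code's individual time steps, tree gravity, SPH forces, and GRAPE interface are not represented, and the function names are illustrative.

```python
import numpy as np

def leapfrog_kdk(pos, vel, mass, dt, accel):
    """One global kick-drift-kick leapfrog step.

    accel(pos, mass) -> (N, 3) accelerations, e.g. from a tree or direct sum.
    """
    a0 = accel(pos, mass)
    vel_half = vel + 0.5 * dt * a0        # kick
    pos_new = pos + dt * vel_half         # drift
    a1 = accel(pos_new, mass)
    vel_new = vel_half + 0.5 * dt * a1    # kick
    return pos_new, vel_new
```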
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
The Arbitrary Body of Revolution Code (ABORC) for SGEMP/IEMP
1976-07-01
... time-dependent spectra ... and current-injection simulation tests of satellites. ... For example, in the case where the emission is due to photon interaction with materials, the photon energy and time spectrum determines the ... performed by separating the response of the internal portion of the problem from that of the external portion. Thus, the details of the internal ...
A Modern Take on the RV Classics: N-body Analysis of GJ 876 and 55 Cnc
NASA Astrophysics Data System (ADS)
Nelson, Benjamin E.; Ford, E. B.; Wright, J.
2013-01-01
Over the past two decades, radial velocity (RV) observations have uncovered a diverse population of exoplanet systems, in particular a subset of multi-planet systems that exhibit strong dynamical interactions. To extract the model parameters (and uncertainties) accurately from these observations, one requires self-consistent n-body integrations and must explore a high-dimensional (7 × number of planets) parameter space, both of which are computationally challenging. Utilizing the power of modern computing resources, we apply our Radial velocity Using N-body Differential Evolution Markov Chain Monte Carlo code (RUN DEMCMC) to two landmark systems from early exoplanet surveys: GJ 876 and 55 Cnc. For GJ 876, we analyze the Keck HIRES (Rivera et al. 2010) and HARPS (Correia et al. 2010) data and constrain the distribution of the Laplace argument. For 55 Cnc, we investigate the orbital architecture based on a cumulative 1086 RV observations from various sources and transit constraints from Winn et al. 2011. In both cases, we also test for long-term orbital stability.
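The differential-evolution step at the core of such a sampler can be sketched as below; the default scale factor and jitter are conventional choices, not values taken from RUN DEMCMC, and the full code wraps this proposal around an N-body model of the radial velocities.

```python
import numpy as np

def de_proposal(chains, i, gamma=None, jitter=1e-6, rng=None):
    """Differential-evolution proposal for chain i.

    chains : (n_chains, n_dim) current states of the chain ensemble.
    Proposal: x_i + gamma * (x_r1 - x_r2) + small noise, with r1, r2 two other
    chains chosen at random; gamma defaults to the usual 2.38 / sqrt(2 n_dim).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_chains, n_dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2.0 * n_dim)
    r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                        size=2, replace=False)
    return (chains[i] + gamma * (chains[r1] - chains[r2])
            + jitter * rng.normal(size=n_dim))
```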
Hybrid petacomputing meets cosmology: The Roadrunner Universe project
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James
2009-07-01
The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.
The dynamics of stellar discs in live dark-matter haloes
NASA Astrophysics Data System (ADS)
Fujii, M. S.; Bédorf, J.; Baba, J.; Portegies Zwart, S.
2018-06-01
Recent developments in computer hardware and software enable researchers to simulate the self-gravitating evolution of galaxies at a resolution comparable to the actual number of stars. Here we present the results of a series of such simulations. We performed N-body simulations of disc galaxies with between 100 and 500 million particles over a wide range of initial conditions. Our calculations include a live bulge, disc, and dark-matter halo, each of which is represented by self-gravitating particles in the N-body code. The simulations are performed using the gravitational N-body tree-code BONSAI running on the Piz Daint supercomputer. We find that the time-scale over which the bar forms increases exponentially with decreasing disc-mass fraction and that the bar formation epoch exceeds a Hubble time when the disc-mass fraction is ˜0.35. These results can be explained with the swing-amplification theory. The condition for the formation of m = 2 spirals is consistent with that for the formation of the bar, which is also an m = 2 phenomenon. We further argue that the non-barred grand-design spiral galaxies are transitional, and that they evolve to barred galaxies on a dynamical time-scale. We also confirm that the disc-mass fraction and shear rate are important parameters for the morphology of disc galaxies. The former affects the number of spiral arms and the bar formation epoch, and the latter determines the pitch angle of the spiral arms.
Running economy and body composition between competitive and recreational level distance runners.
Mooses, Martin; Jürimäe, J; Mäestu, J; Mooses, K; Purge, P; Jürimäe, T
2013-09-01
The aim of the present study was to compare running economy between competitive and recreational level athletes at their individual ventilatory thresholds on track and to compare body composition parameters that are related to the individual running economy measured on track. We performed a cross-sectional analysis of a total of 45 male runners classified as competitive runners (CR; n = 28) and recreational runners (RR; n = 17). All runners performed an incremental test on a treadmill until voluntary exhaustion and, at least 48 h later, a 2 × 2000 m test on an indoor track with intensities according to ventilatory thresholds 1 and 2. During the running tests, athletes wore a portable oxygen analyzer. Body composition was measured with the dual-energy X-ray absorptiometry (DXA) method. Running economy at the first ventilatory threshold was not significantly related to any of the measured body composition values or leg mass ratios either in the competitive or in the recreational runners group. This study showed that there was no difference in the running economy between distance runners with different performance levels when running on track, while there was a difference in the second ventilatory threshold speed in different groups of distance runners. Differences in running economy between competitive and recreational athletes cannot be explained by body composition and/or different leg mass ratios.
NASA Astrophysics Data System (ADS)
Terashima, Atsunori; Nilsson, Mikael; Ozawa, Masaki; Chiba, Satoshi
2017-09-01
The Aprés ORIENT research program, as a concept for an advanced nuclear fuel cycle, was initiated in FY2011 aiming at creating stable, highly-valuable elements by nuclear transmutation from fission products. In order to simulate creation of such elements by (n, γ) reactions followed by β- decay in reactors, a continuous-energy Monte Carlo burnup calculation code, MVP-BURN, was employed. One of the most important tasks is then to confirm the reliability of the MVP-BURN code and the evaluated neutron cross section library. In this study, both an experiment of neutron activation analysis in the TRIGA Mark I reactor at the University of California, Irvine and the corresponding burnup calculation using the MVP-BURN code were performed for validation of the simulation of transmutation of light platinum group elements. In particular, neutron capture reactions such as 102Ru(n, γ)103Ru, 104Ru(n, γ)105Ru, and 108Pd(n, γ)109Pd were considered. From a comparison between the calculation (C) and the experiment (E) for 102Ru(n, γ)103Ru, the deviation (C/E-1) was significantly large, exceeding 20%. It is therefore strongly suspected that this difference originates not from the MVP-BURN code but from the JENDL-4.0 neutron capture cross section of 102Ru used in this simulation.
JSPAM: A restricted three-body code for simulating interacting galaxies
NASA Astrophysics Data System (ADS)
Wallin, J. F.; Holincheck, A. J.; Harvey, A.
2016-07-01
Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
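In a restricted three-body scheme of this kind, each disc particle is massless and feels only the two massive galaxy centres (plus, in JSPAM, a dynamical-friction term). The sketch below shows that acceleration with a crude velocity-proportional drag standing in for the friction model; it is illustrative and is not the JSPAM implementation.

```python
import numpy as np

def test_particle_accel(r, v, r1, r2, m1, m2, eps2=1e-4, c_df=0.0):
    """Acceleration of a massless test particle in a two-galaxy encounter (G = 1).

    r, v   : position and velocity of the test particle
    r1, r2 : positions of the two massive centres (integrated separately)
    m1, m2 : their masses
    c_df   : crude drag coefficient; the real dynamical-friction treatment is
             velocity- and density-dependent, this is only a placeholder.
    """
    d1 = r1 - r
    d2 = r2 - r
    a = (m1 * d1 / (np.dot(d1, d1) + eps2) ** 1.5
         + m2 * d2 / (np.dot(d2, d2) + eps2) ** 1.5)
    return a - c_df * v
```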
The MOLDY short-range molecular dynamics package
NASA Astrophysics Data System (ADS)
Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.
2011-12-01
We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy. Program summary Program title: MOLDY Catalogue identifier: AEJU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 382 881 No. of bytes in distributed program, including test data, etc.: 6 705 242 Distribution format: tar.gz Programming language: Fortran 95/OpenMP Computer: Any Operating system: Any Has the code been vectorised or parallelized?: Yes. OpenMP is required for parallel execution RAM: 100 MB or more Classification: 7.7 Nature of problem: Moldy addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as measuring thermodynamic properties, diffusion, radiation damage, fracture, twinning deformation, nucleation and growth of phase transitions, sputtering, etc. In the vast majority of materials, the interactions are non-pairwise, and the code must be able to deal with many-body forces. Solution method: Molecular dynamics involves integrating Newton's equations of motion. MOLDY uses Verlet (for good energy conservation) or predictor-corrector (for accurate trajectories) algorithms. It is parallelised using OpenMP. It also includes a static minimisation routine to find the lowest energy structure. Boundary conditions for surfaces, clusters, grain boundaries, thermostat (Nose), barostat (Parrinello-Rahman), and externally applied strain are provided. The initial configuration can be either a repeated unit cell or have all atoms given explicitly. Initial velocities are generated internally, but it is also possible to specify the velocity of a particular atom. A wide range of interatomic force models are implemented, including embedded atom, Morse or Lennard-Jones. Thus the program is especially well suited to calculations of metals. Restrictions: The code is designed for short-ranged potentials, and there is no Ewald sum. Thus for long range interactions where all particles interact with all others, the order-N scaling will fail. Different interatomic potential forms require recompilation of the code. Additional comments: There is a set of associated open-source analysis software for postprocessing and visualisation.
This includes local crystal structure recognition and identification of topological defects. Running time: A set of test modules for running time is provided. The code scales as order N. The parallelisation shows near-linear scaling with number of processors in a shared memory environment. A typical run of a few tens of nanometers for a few nanoseconds will run on a timescale of days on a multiprocessor desktop.
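As an illustration of the kind of update such a short-range MD code performs, the sketch below takes one velocity-Verlet step with a cutoff Lennard-Jones potential and minimum-image periodic boundaries. It is a toy stand-in written for this summary: MOLDY itself is Fortran 95, supports EAM many-body potentials, and uses neighbour-list bookkeeping not shown here.

```python
import numpy as np

def lj_forces(pos, box, rcut=2.5, eps=1.0, sigma=1.0):
    """Lennard-Jones forces with a cutoff and minimum-image periodic boundaries."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)                 # minimum image convention
        r2 = (dr * dr).sum(axis=1)
        mask = r2 < rcut * rcut
        inv_r2 = sigma * sigma / r2[mask]
        inv_r6 = inv_r2 ** 3
        fr = 24.0 * eps * (2.0 * inv_r6 ** 2 - inv_r6) / r2[mask]  # |f| / r
        fij = fr[:, None] * dr[mask]                   # force on j due to i
        f[i] -= fij.sum(axis=0)
        f[i + 1:][mask] += fij
    return f

def velocity_verlet(pos, vel, masses, dt, box):
    """One velocity-Verlet step (a stand-in for the Verlet option in the summary)."""
    f = lj_forces(pos, box)
    vel = vel + 0.5 * dt * f / masses[:, None]
    pos = (pos + dt * vel) % box
    f = lj_forces(pos, box)
    vel = vel + 0.5 * dt * f / masses[:, None]
    return pos, vel
```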
A users manual for the method of moments Aircraft Modeling Code (AMC), version 2
NASA Technical Reports Server (NTRS)
Peters, M. E.; Newman, E. H.
1994-01-01
This report serves as a user's manual for Version 2 of the 'Aircraft Modeling Code' or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. This report describes the input command language and also includes several examples which illustrate typical code inputs and outputs.
A user's manual for the method of moments Aircraft Modeling Code (AMC)
NASA Technical Reports Server (NTRS)
Peters, M. E.; Newman, E. H.
1989-01-01
This report serves as a user's manual for the Aircraft Modeling Code or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. The input command language is described and several examples which illustrate typical code inputs and outputs are also included.
Computer simulation of multigrid body dynamics and control
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Moon, Young I.; Venkayya, V. B.
1990-01-01
The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems - a one-degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS performed the time history simulation of this problem. System state space variables and their time derivatives from the two simulation codes were compared.
Cosmological neutrino simulations at extreme scale
Emberson, J. D.; Yu, Hao-Ran; Inman, Derek; ...
2017-08-01
Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world’s largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
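As a schematic illustration of the COLA decomposition described above, the toy 1D sketch below evolves only the residual displacement around the Zel'dovich trajectory while the large-scale LPT part is added back analytically. It deliberately ignores the cosmological scale-factor bookkeeping and the modified kick/drift operators of the real method, and the function names and signatures are hypothetical rather than pycola's API.

```python
import numpy as np

def evolve_cola(q, psi1, D, D_ddot, force, times, dt):
    """Toy 1D COLA-style stepping (illustration only; not pycola's actual code).

    q      : Lagrangian particle coordinates
    psi1   : Zel'dovich displacement field evaluated at q
    D      : callable, linear growth factor D(t)          (assumed given)
    D_ddot : callable, second time derivative of D(t)     (assumed given)
    force  : callable, gravitational acceleration at positions x
    Positions are split as x(t) = q + D(t)*psi1 + x_res: the LPT part is added
    back analytically, so only the small residual is integrated numerically and
    large-scale growth stays accurate even with very few time steps.
    """
    x_res = np.zeros_like(q)
    v_res = np.zeros_like(q)
    for t in times:
        x = q + D(t) * psi1 + x_res                # full position for the force
        acc_res = force(x) - D_ddot(t) * psi1      # subtract what LPT already accounts for
        v_res += acc_res * dt                      # kick
        x_res += v_res * dt                        # drift
    return q + D(times[-1]) * psi1 + x_res
```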
Bach, M; Hoffmann, M B
2018-06-01
The data presented in this article are related to the research article entitled "Retinal conduction speed analysis reveals different origins of the P50 and N95 components of the (multifocal) pattern electroretinogram" (Bach et al., 2018) [1]. That analysis required the individual length data of the retinal nerve fibers (from ganglion cell body to optic nerve head, depending on the position of the ganglion cell body). Jansonius et al. (2009, 2012) [2,3] mathematically modeled the path morphology of the human retinal nerve fibers. Here we present a working implementation, with source code for the free and open-source programming environment "R", of the Jansonius formulas, including all errata. One file defines Jansonius et al.'s "phi" function. This function allows quantitative modelling of the paths (and any measures derived from them) of the retinal nerve fibers. As a working demonstration, a second file contains a graph which plots samples of nerve fibers. The included R code runs in base R without the need for any additional packages.
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
User's Manual for FEMOM3DR. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
FEMOM3DR is a computer code written in FORTRAN 77 to compute radiation characteristics of antennas on a 3D body using the combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code is written to handle different feeding structures like coaxial line, rectangular waveguide, and circular waveguide. This code uses tetrahedral elements with vector edge basis functions for FEM and triangular elements with roof-top basis functions for MoM. By virtue of FEM, this code can handle arbitrarily shaped three-dimensional bodies with inhomogeneous lossy materials; and due to MoM the computational domain can be terminated in any arbitrary shape. The User's Manual is written to make the user acquainted with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
N-MODY: a code for collisionless N-body simulations in modified Newtonian dynamics.
NASA Astrophysics Data System (ADS)
Londrillo, P.; Nipoti, C.
We describe the numerical code N-MODY, a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
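For intuition about the non-linear MOND field equation mentioned above: in spherical symmetry it reduces to the algebraic relation g·μ(g/a0) = g_N between the MOND and Newtonian accelerations, and for the commonly used "simple" interpolation function μ(x) = x/(1+x) this relation can be inverted in closed form. The sketch below shows only that inversion; it is not N-MODY's grid-based potential solver.

```python
import numpy as np

A0 = 1.2e-10  # MOND acceleration scale in m s^-2 (commonly quoted value)

def mond_acceleration(g_newton, a0=A0):
    """Invert g * mu(g/a0) = g_N for the 'simple' interpolation mu(x) = x/(1+x).

    Spherically symmetric illustration only; N-MODY itself solves the full
    non-linear field equation on a spherical grid.
    """
    g_newton = np.asarray(g_newton, dtype=float)
    # g * (g/a0)/(1 + g/a0) = g_N  <=>  g^2 - g_N*g - g_N*a0 = 0
    return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4.0 * a0 * g_newton))

# deep-MOND check: for g_N << a0 the result should approach sqrt(g_N * a0)
gN = 1e-13
print(mond_acceleration(gN), np.sqrt(gN * A0))
```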
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamek, Julian; Daverio, David; Durrer, Ruth
We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models, going beyond the usually adopted quasi-static approximation. Our code is publicly available.
GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids
NASA Astrophysics Data System (ADS)
Hubber, D. A.; Rosotti, G. P.; Booth, R. A.
2018-01-01
GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OPENMP and MPI and contains a PYTHON library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website github at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.
Development and Evaluation of an Order-N Formulation for Multi-Flexible Body Space Systems
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Quiocho, Leslie J.
2013-01-01
This paper presents the development of a generic recursive Order-N algorithm for systems with rigid and flexible bodies, in tree or closed-loop topology, where N is the number of bodies in the system. Simulation results are presented for several test cases to verify and evaluate the performance of the code compared to an existing efficient dense mass matrix-based code. The comparison brought out situations where Order-N or mass matrix-based algorithms could be useful.
CUBE: Information-optimized parallel cosmological N-body simulation code
NASA Astrophysics Data System (ADS)
Yu, Hao-Ran; Pen, Ue-Li; Wang, Xin
2018-05-01
CUBE, written in Coarray Fortran, is a particle-mesh based parallel cosmological N-body simulation code. The memory usage of CUBE can be as low as 6 bytes per particle. Particle pairwise (PP) forces, cosmological neutrinos, and a spherical overdensity (SO) halofinder are included.
Body Temperature and Energy Metabolism of Brown Lemming in Relation to Running Speed,
1979-01-01
[Scanned DTIC report; the front-matter OCR is garbled. Recoverable details: "Body temperature and energy metabolism of brown lemming in relation to running speed" by Timothy M. Casey, Dept. of E. Physiology, Cook College, Rutgers University, New Brunswick, New Jersey 08903; contract N00014-75-C-0635; running head: "Metabolism and Tb of running lemmings"; Arctic Institute of North America, Arlington, VA, 1979.]
Improving fast generation of halo catalogues with higher order Lagrangian perturbation theory
NASA Astrophysics Data System (ADS)
Munari, Emiliano; Monaco, Pierluigi; Sefusatti, Emiliano; Castorina, Emanuele; Mohammad, Faizan G.; Anselmi, Stefano; Borgani, Stefano
2017-03-01
We present the latest version of PINOCCHIO, a code that generates catalogues of dark matter haloes in an approximate but fast way with respect to an N-body simulation. This code version implements a new on-the-fly production of halo catalogues on the past light cone with continuous time sampling, and extends the computation of particle and halo displacements up to third-order Lagrangian perturbation theory (LPT), in contrast with previous versions that used the Zel'dovich approximation. We run PINOCCHIO on the same initial configuration of a reference N-body simulation, so that the comparison extends to the object-by-object level. We consider haloes at redshifts 0 and 1, using different LPT orders either for halo construction or to compute halo final positions. We compare the clustering properties of PINOCCHIO haloes with those from the simulation by computing the power spectrum and two-point correlation function in real and redshift space (monopole and quadrupole), the bispectrum and the phase difference of halo distributions. We find that 2LPT and 3LPT give noticeable improvement. 3LPT provides the best agreement with N-body when it is used to displace haloes, while 2LPT gives better results for constructing haloes. At the highest orders, linear bias is typically recovered at a few per cent level. In Fourier space and using 3LPT for halo displacements, the halo power spectrum is recovered to within 10 per cent up to k_max ∼ 0.5 h Mpc⁻¹. The results presented in this paper have interesting implications for the generation of large ensembles of mock surveys for the scientific exploitation of data from big surveys.
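The displacement step described above (moving particles or halo centres with LPT of a chosen order) amounts to summing growth-factor-weighted displacement fields. A minimal sketch, assuming the displacement fields and growth factors have already been computed; this is not PINOCCHIO's actual interface.

```python
import numpy as np

def lpt_displace(q, psi1, psi2, psi3, D1, D2, D3, order=3):
    """Move particles (or halo centres) to their LPT positions.

    Schematic version of the displacement step:
        x = q + D1*psi1 (+ D2*psi2) (+ D3*psi3)
    The displacement fields psi_n (evaluated at the Lagrangian coordinates q)
    and the growth factors D_n are assumed to be precomputed, e.g. from the
    linear density field via FFTs.
    """
    x = q + D1 * np.asarray(psi1)
    if order >= 2:
        x = x + D2 * np.asarray(psi2)      # second-order correction
    if order >= 3:
        x = x + D3 * np.asarray(psi3)      # third-order correction
    return x
```

Each higher order adds a successively smaller correction, which is consistent with the abstract's finding that 3LPT mainly improves the recovered clustering on quasi-linear scales.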
Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-12-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes `frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
Global positioning systems (GPS) and microtechnology sensors in team sports: a systematic review.
Cummins, Cloe; Orr, Rhonda; O'Connor, Helen; West, Cameron
2013-10-01
Use of Global positioning system (GPS) technology in team sport permits measurement of player position, velocity, and movement patterns. GPS provides scope for better understanding of the specific and positional physiological demands of team sport and can be used to design training programs that adequately prepare athletes for competition with the aim of optimizing on-field performance. The objective of this study was to conduct a systematic review of the depth and scope of reported GPS and microtechnology measures used within individual sports in order to present the contemporary and emerging themes of GPS application within team sports. A systematic review of the application of GPS technology in team sports was conducted. We systematically searched electronic databases from earliest record to June 2012. Permutations of key words included GPS; male and female; age 12-50 years; able-bodied; and recreational to elite competitive team sports. The 35 manuscripts meeting the eligibility criteria included 1,276 participants (age 11.2-31.5 years; 95 % males; 53.8 % elite adult athletes). The majority of manuscripts reported on GPS use in various football codes: Australian football league (AFL; n = 8), soccer (n = 7), rugby union (n = 6), and rugby league (n = 6), with limited representation in other team sports: cricket (n = 3), hockey (n = 3), lacrosse (n = 1), and netball (n = 1). Of the included manuscripts, 34 (97 %) detailed work rate patterns such as distance, relative distance, speed, and accelerations, with only five (14.3 %) reporting on impact variables. Activity profiles characterizing positional play and competitive levels were also described. Work rate patterns were typically categorized into six speed zones, ranging from 0 to 36.0 km·h⁻¹, with descriptors ranging from walking to sprinting used to identify the type of activity mainly performed in each zone. With the exception of cricket, no standardized speed zones or definitions were observed within or between sports. Furthermore, speed zone criteria often varied widely within (e.g. zone 3 of AFL ranged from 7 to 16 km·h⁻¹) and between sports (e.g. zone 3 of soccer ranged from 3.0 to <13 km·h⁻¹ code). Activity descriptors for a zone also varied widely between sports (e.g. zone 4 definitions ranged from jog, run, high velocity, to high-intensity run). Most manuscripts focused on the demands of higher intensity efforts (running and sprint) required by players. Body loads and impacts, also summarized into six zones, showed small variations in descriptions, with zone criteria based upon grading systems provided by GPS manufacturers. This systematic review highlights that GPS technology has been used more often across a range of football codes than across other team sports. Work rate pattern activities are most often reported, whilst impact data, which require the use of microtechnology sensors such as accelerometers, are least reported. There is a lack of consistency in the definition of speed zones and activity descriptors, both within and across team sports, thus underscoring the difficulties encountered in meaningful comparisons of the physiological demands both within and between team sports. A consensus on definitions of speed zones and activity descriptors within sports would facilitate direct comparison of the demands within the same sport. Meta-analysis from systematic review would also be supported. Standardization of speed zones between sports may not be feasible due to disparities in work rate pattern activities.
Voluntary resistance running wheel activity pattern and skeletal muscle growth in rats.
Legerlotz, Kirsten; Elliott, Bradley; Guillemin, Bernard; Smith, Heather K
2008-06-01
The aims of this study were to characterize the pattern of voluntary activity of young rats in response to resistance loading on running wheels and to determine the effects of the activity on the growth of six limb skeletal muscles. Male Sprague-Dawley rats (4 weeks old) were housed individually with a resistance running wheel (R-RUN, n = 7) or a conventional free-spinning running wheel (F-RUN, n = 6) or without a wheel, as non-running control animals (CON, n = 6). The torque required to move the wheel in the R-RUN group was progressively increased, and the activity (velocity, distance and duration of each bout) of the two running wheel groups was recorded continuously for 45 days. The R-RUN group performed many more, shorter and faster bouts of running than the F-RUN group, yet the mean daily distance was not different between the F-RUN (1.3 +/- 0.2 km) and R-RUN group (1.4 +/- 0.6 km). Only the R-RUN resulted in a significantly (P < 0.05) enhanced muscle wet mass, relative to the increase in body mass, of the plantaris (23%) and vastus lateralis muscle (17%), and the plantaris muscle fibre cross-sectional area, compared with CON. Both F-RUN and R-RUN led to a significantly greater wet mass relative to increase in body mass and muscle fibre cross-sectional area in the soleus muscle compared with CON. We conclude that the pattern of voluntary activity on a resistance running wheel differs from that on a free-spinning running wheel and provides a suitable model to induce physiological muscle hypertrophy in rats.
N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics
NASA Astrophysics Data System (ADS)
Londrillo, Pasquale; Nipoti, Carlo
2011-02-01
N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.
Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro
2008-03-01
We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6 billion cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5 σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.
PoMiN: A Post-Minkowskian N-body Solver
NASA Astrophysics Data System (ADS)
Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard
2018-06-01
In this paper, we introduce PoMiN, a lightweight N-body code based on the post-Minkowskian N-body Hamiltonian of Ledvinka et al., which includes general relativistic effects up to first order in Newton's constant G, and all orders in the speed of light c. PoMiN is written in C and uses a fourth-order Runge–Kutta integration scheme. PoMiN has also been written to handle an arbitrary number of particles (both massive and massless), with a computational complexity that scales as O(N^2). We describe the methods we used to simplify and organize the Hamiltonian, and the tests we performed (convergence, conservation, and analytical comparison tests) to validate the code.
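PoMiN itself is written in C; as a language-agnostic illustration of a fourth-order Runge-Kutta step driving an O(N^2) pairwise force loop, here is a hedged Python sketch that uses a plain Newtonian softened force in place of the post-Minkowskian Hamiltonian derivatives. All names and the softening parameter are assumptions for the example.

```python
import numpy as np

def newtonian_accel(pos, mass, G=1.0, eps=1e-3):
    """Direct-sum O(N^2) accelerations; a stand-in for the post-Minkowskian
    Hamiltonian derivatives that PoMiN actually evaluates."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]
        r3 = (np.sum(d * d, axis=1) + eps**2) ** 1.5
        r3[i] = np.inf                               # no self-interaction
        acc[i] = G * np.sum(mass[:, None] * d / r3[:, None], axis=0)
    return acc

def rk4_step(pos, vel, mass, dt):
    """One classical fourth-order Runge-Kutta step for positions and velocities."""
    def deriv(p, v):
        return v, newtonian_accel(p, mass)
    k1x, k1v = deriv(pos, vel)
    k2x, k2v = deriv(pos + 0.5 * dt * k1x, vel + 0.5 * dt * k1v)
    k3x, k3v = deriv(pos + 0.5 * dt * k2x, vel + 0.5 * dt * k2v)
    k4x, k4v = deriv(pos + dt * k3x, vel + dt * k3v)
    return (pos + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x),
            vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))
```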
Buchheit, Martin; Mendez-Villanueva, Alberto
2014-01-01
The aim of the present study was to compare, in 36 highly trained under-15 soccer players, the respective effects of age, maturity and body dimensions on match running performance. Maximal sprinting (MSS) and aerobic speeds were estimated. Match running performance was analysed with GPS (GPSport, 1 Hz) during 19 international friendly games (n = 115 player-files). Total distance and distance covered >16 km h(-1) (D > 16 km h(-1)) were collected. Players advanced in age and/or maturation, or having larger body dimensions presented greater locomotor (Cohen's d for MSS: 0.5-1.0, likely to almost certain) and match running performances (D > 16 km h(-1): 0.2-0.5, possibly to likely) than their younger, less mature and/or smaller teammates. These age-, maturation- and body size-related differences were of larger magnitude for field test measures versus match running performance. Compared with age and body size (unclear to likely), maturation (likely to almost certainly for all match variables) had the greatest impact on match running performance. The magnitude of the relationships between age, maturation and body dimensions and match running performance were position-dependent. Within a single age-group in the present player sample, maturation had a substantial impact on match running performance, especially in attacking players. Coaches may need to consider players' maturity status when assessing their on-field playing performance.
Fast decoder for local quantum codes using Groebner basis
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2013-03-01
Based on arXiv:1204.1063. A local translation-invariant quantum code has a description in terms of Laurent polynomials. As an application of this observation, we present a fast decoding algorithm for translation-invariant local quantum codes in any spatial dimension using the straightforward division algorithm for multivariate polynomials. The running time is O(n log n) on average, or O(n^2 log n) in the worst case, where n is the number of physical qubits. The algorithm improves a subroutine of the renormalization-group decoder by Bravyi and Haah (arXiv:1112.3252) in the translation-invariant case. This work is supported in part by the Institute for Quantum Information and Matter, an NSF Physics Frontier Center, and the Korea Foundation for Advanced Studies.
Trabecular bone in the calcaneus of runners
Holt, Brigitte; Troy, Karen; Hamill, Joseph
2017-01-01
Trabecular bone of the human calcaneus is subjected to extreme repetitive forces during endurance running and should adapt in response to this strain. To assess possible bone functional adaptation in the posterior region of the calcaneus, we recruited forefoot-striking runners (n = 6), rearfoot-striking runners (n = 6), and non-runners (n = 6), all males aged 20–41 for this institutionally approved study. Foot strike pattern was confirmed for each runner using a motion capture system. We obtained high resolution peripheral computed tomography scans of the posterior calcaneus for both runners and non-runners. No statistically significant differences were found between runners and nonrunners or forefoot strikers and rearfoot strikers. Mean trabecular thickness and mineral density were greatest in forefoot runners with strong effect sizes (<0.80). Trabecular thickness was positively correlated with weekly running distance (r2 = 0.417, p<0.05) and years running (r2 = 0.339, p<0.05) and negatively correlated with age at onset of running (r2 = 0.515, p<0.01). Trabecular thickness, mineral density and bone volume ratio of nonrunners were highly correlated with body mass (r2 = 0.824, p<0.05) and nonrunners were significantly heavier than runners (p<0.05). Adjusting for body mass revealed significantly thicker trabeculae in the posterior calcaneus of forefoot strikers, likely an artifact of greater running volume and earlier onset of running in this subgroup; thus, individuals with the greatest summative loading stimulus had, after body mass adjustment, the thickest trabeculae. Further study with larger sample sizes is necessary to elucidate the role of footstrike on calcaneal trabecular structure. To our knowledge, intraspecific body mass correlations with measures of trabecular robusticity have not been reported elsewhere. We hypothesize that early adoption of running and years of sustained moderate volume running stimulate bone modeling in trabeculae of the posterior calcaneus. PMID:29141022
Trabecular bone in the calcaneus of runners.
Best, Andrew; Holt, Brigitte; Troy, Karen; Hamill, Joseph
2017-01-01
Trabecular bone of the human calcaneus is subjected to extreme repetitive forces during endurance running and should adapt in response to this strain. To assess possible bone functional adaptation in the posterior region of the calcaneus, we recruited forefoot-striking runners (n = 6), rearfoot-striking runners (n = 6), and non-runners (n = 6), all males aged 20-41 for this institutionally approved study. Foot strike pattern was confirmed for each runner using a motion capture system. We obtained high resolution peripheral computed tomography scans of the posterior calcaneus for both runners and non-runners. No statistically significant differences were found between runners and nonrunners or forefoot strikers and rearfoot strikers. Mean trabecular thickness and mineral density were greatest in forefoot runners with strong effect sizes (<0.80). Trabecular thickness was positively correlated with weekly running distance (r2 = 0.417, p<0.05) and years running (r2 = 0.339, p<0.05) and negatively correlated with age at onset of running (r2 = 0.515, p<0.01). Trabecular thickness, mineral density and bone volume ratio of nonrunners were highly correlated with body mass (r2 = 0.824, p<0.05) and nonrunners were significantly heavier than runners (p<0.05). Adjusting for body mass revealed significantly thicker trabeculae in the posterior calcaneus of forefoot strikers, likely an artifact of greater running volume and earlier onset of running in this subgroup; thus, individuals with the greatest summative loading stimulus had, after body mass adjustment, the thickest trabeculae. Further study with larger sample sizes is necessary to elucidate the role of footstrike on calcaneal trabecular structure. To our knowledge, intraspecific body mass correlations with measures of trabecular robusticity have not been reported elsewhere. We hypothesize that early adoption of running and years of sustained moderate volume running stimulate bone modeling in trabeculae of the posterior calcaneus.
ZENO: N-body and SPH Simulation Codes
NASA Astrophysics Data System (ADS)
Barnes, Joshua E.
2011-02-01
The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: Structured data file utilities facilitate basic operations on binary data, including import/export of ZENO data to other systems. Snapshot generation routines create particle distributions with various properties. Systems with user-specified density profiles can be realized in collisionless or gaseous form; multiple spherical and disk components may be set up in mutual equilibrium. Snapshot manipulation routines permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle. Simulation codes include both pure N-body and combined N-body/SPH programs: Pure N-body codes are available in both uniprocessor and parallel versions. SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions. Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.
Milanović, Zoran; Pantelić, Saša; Sporiš, Goran; Mohr, Magni; Krustrup, Peter
2015-01-01
The purpose of this study was to determine the effects of recreational soccer (SOC) compared to moderate-intensity continuous running (RUN) on all health-related physical fitness components in healthy untrained men. Sixty-nine participants were recruited and randomly assigned to one of three groups, of which sixty-four completed the study: a soccer training group (SOC; n = 20, 34±4 (means±SD) years, 78.1±8.3 kg, 179±4 cm); a running group (RUN; n = 21, 32±4 years, 78.0±5.5 kg, 179±7 cm); or a passive control group (CON; n = 23, 30±3 years, 76.6±12.0 kg, 178±8 cm). The training intervention lasted 12 weeks and consisted of three 60-min sessions per week. All participants were tested for each of the following physical fitness components: maximal aerobic power, minute ventilation, maximal heart rate, squat jump (SJ), countermovement jump with arm swing (CMJ), sit-and-reach flexibility, and body composition. Over the 12 weeks, VO2max relative to body weight increased more (p<0.05) in SOC (24.2%, ES = 1.20) and RUN (21.5%, ES = 1.17) than in CON (-5.0%, ES = -0.24), partly due to large changes in body mass (-5.9, -5.7 and +2.6 kg, p<0.05 for SOC, RUN and CON, respectively). Over the 12 weeks, SJ and CMJ performance increased more (p<0.05) in SOC (14.8 and 12.1%, ES = 1.08 and 0.81) than in RUN (3.3 and 3.0%, ES = 0.23 and 0.19) and CON (0.3 and 0.2%), while flexibility also increased more (p<0.05) in SOC (94%, ES = 0.97) than in RUN and CON (0-2%). In conclusion, untrained men displayed marked improvements in maximal aerobic power after 12 weeks of soccer training and moderate-intensity running, partly due to large decreases in body mass. Additionally soccer training induced pronounced positive effects on jump performance and flexibility, making soccer an effective broad-spectrum fitness training intervention.
A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu
2011-04-20
We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_⊙ disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M_⊕.
Improved Algorithms Speed It Up for Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazi, A
2005-09-20
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.
Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Graß, Tobias
2012-03-01
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations.
Program summary:
Program title: Strongdeco
Catalogue identifier: AELA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5475
No. of bytes in distributed program, including test data, etc.: 31 071
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer on which Mathematica can be installed
Operating system: Linux, Windows, Mac
Classification: 2.9
Nature of problem: Analysis of strongly correlated quantum states.
Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools.
Running time: The distributed notebook takes a couple of minutes to run.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Pope, Adrian; Finkel, Hal
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
The Forest Method as a New Parallel Tree Method with the Sectional Voronoi Tessellation
NASA Astrophysics Data System (ADS)
Yahagi, Hideki; Mori, Masao; Yoshii, Yuzuru
1999-09-01
We have developed a new parallel tree method which will be called the forest method hereafter. This new method uses the sectional Voronoi tessellation (SVT) for the domain decomposition. The SVT decomposes a whole space into polyhedra and allows their flat borders to move by assigning different weights. The forest method determines these weights based on the load balancing among processors by means of the overload diffusion (OLD). Moreover, since all the borders are flat, before receiving the data from other processors, each processor can collect enough data to calculate the gravity force with precision. Both the SVT and the OLD are coded in a highly vectorizable manner to accommodate on vector parallel processors. The parallel code based on the forest method with the Message Passing Interface is run on various platforms so that a wide portability is guaranteed. Extensive calculations with 15 processors of Fujitsu VPP300/16R indicate that the code can calculate the gravity force exerted on 10^5 particles in each second for some ideal dark halo. This code is found to enable an N-body simulation with 10^7 or more particles for a wide dynamic range and is therefore a very powerful tool for the study of galaxy formation and large-scale structure in the universe.
NASA Technical Reports Server (NTRS)
Treiber, David A.; Muilenburg, Dennis A.
1995-01-01
The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C_Lmax. Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.
The valid measurement of running economy in runners.
Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P
2014-10-01
Oxygen cost (OC) is commonly used to assess an athlete's running economy, although the validity of this measure is often overlooked. This study evaluated the validity of OC as a measure of running economy by comparison with the underlying energy cost (EC). In addition, the most appropriate method of removing the influence of body mass was determined to elucidate a measure of running economy that enables valid interindividual comparisons. One hundred and seventy-two highly trained endurance runners (males, n = 101; females, n = 71) performed a discontinuous submaximal running assessment, consisting of approximately seven 3-min stages (1 km·h⁻¹ increments), to determine the absolute OC (L·km⁻¹) and EC (kcal·km⁻¹) for the four speeds below lactate turn point. Comparisons between models revealed linear ratio scaling to be a more suitable method than power function scaling for removing the influence of body mass for both EC (males, R = 0.589 vs 0.588; females, R = 0.498 vs 0.482) and OC (males, R = 0.657 vs 0.652; females, R = 0.532 vs 0.531). There were stepwise increases in EC and RER with increments in running speed (both, P < 0.001). However, no differences were observed for OC across the four monitored speeds (P = 0.54). Although EC increased with running speed, OC was insensitive to changes in running speed and, therefore, does not appear to provide a valid index of the underlying EC of running, likely due to the inability of OC to account for variations in substrate use. Therefore, EC should be used as the primary measure of running economy, and for runners, an appropriate scaling with body mass is recommended.
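To make the two normalization models concrete, here is a minimal sketch of linear ratio scaling versus power-function (allometric) scaling of an absolute oxygen cost by body mass. The runner values and the exponent are hypothetical; in the study the exponent would be fitted to the sample rather than assumed.

```python
import numpy as np

def ratio_scaled(oc_abs, mass):
    """Linear ratio scaling: oxygen cost per kilogram of body mass."""
    return oc_abs / mass

def power_scaled(oc_abs, mass, b=0.75):
    """Allometric (power-function) scaling, OC / mass**b; the exponent b would
    normally be fitted to the sample rather than assumed, as it is here."""
    return oc_abs / mass ** b

# hypothetical runners with identical per-kg economy but different body masses
mass = np.array([55.0, 70.0, 85.0])          # kg
oc_abs = 0.200 * mass                        # absolute oxygen cost, L per km
print(ratio_scaled(oc_abs, mass))            # identical values: mass influence removed
print(power_scaled(oc_abs, mass))            # residual mass dependence remains
```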
Cosmology in one dimension: Vlasov dynamics.
Manfredi, Giovanni; Rouet, Jean-Louis; Miller, Bruce; Shiozawa, Yui
2016-04-01
Numerical simulations of self-gravitating systems are generally based on N-body codes, which solve the equations of motion of a large number of interacting particles. This approach suffers from poor statistical sampling in regions of low density. In contrast, Vlasov codes, by meshing the entire phase space, can reach higher accuracy irrespective of the density. Here, we perform one-dimensional Vlasov simulations of a long-standing cosmological problem, namely, the fractal properties of an expanding Einstein-de Sitter universe in Newtonian gravity. The N-body results are confirmed for high-density regions and extended to regions of low matter density, where the N-body approach usually fails.
Milanović, Zoran; Pantelić, Saša; Sporiš, Goran; Mohr, Magni; Krustrup, Peter
2015-01-01
The purpose of this study was to determine the effects of recreational soccer (SOC) compared to moderate-intensity continuous running (RUN) on all health-related physical fitness components in healthy untrained men. Sixty-nine participants were recruited and randomly assigned to one of three groups, of which sixty-four completed the study: a soccer training group (SOC; n = 20, 34±4 (means±SD) years, 78.1±8.3 kg, 179±4 cm); a running group (RUN; n = 21, 32±4 years, 78.0±5.5 kg, 179±7 cm); or a passive control group (CON; n = 23, 30±3 years, 76.6±12.0 kg, 178±8 cm). The training intervention lasted 12 weeks and consisted of three 60-min sessions per week. All participants were tested for each of the following physical fitness components: maximal aerobic power, minute ventilation, maximal heart rate, squat jump (SJ), countermovement jump with arm swing (CMJ), sit-and-reach flexibility, and body composition. Over the 12 weeks, VO2max relative to body weight increased more (p<0.05) in SOC (24.2%, ES = 1.20) and RUN (21.5%, ES = 1.17) than in CON (-5.0%, ES = -0.24), partly due to large changes in body mass (-5.9, -5.7 and +2.6 kg, p<0.05 for SOC, RUN and CON, respectively). Over the 12 weeks, SJ and CMJ performance increased more (p<0.05) in SOC (14.8 and 12.1%, ES = 1.08 and 0.81) than in RUN (3.3 and 3.0%, ES = 0.23 and 0.19) and CON (0.3 and 0.2%), while flexibility also increased more (p<0.05) in SOC (94%, ES = 0.97) than in RUN and CON (0–2%). In conclusion, untrained men displayed marked improvements in maximal aerobic power after 12 weeks of soccer training and moderate-intensity running, partly due to large decreases in body mass. Additionally soccer training induced pronounced positive effects on jump performance and flexibility, making soccer an effective broad-spectrum fitness training intervention. PMID:26305880
MPPhys—A many-particle simulation package for computational physics education
NASA Astrophysics Data System (ADS)
Müller, Thomas
2014-03-01
In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses although the underlying equations of motion are essentially the same and although there is a strong motivation for high-school students in particular because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which could be illustrated, for example, by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations.
Catalogue identifier: AERR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 111327
No. of bytes in distributed program, including test data, etc.: 608411
Distribution format: tar.gz
Programming language: C++, OpenGL, GLSL, OpenCL
Computer: Linux and Windows platforms with OpenGL support
Operating system: Linux and Windows
RAM: Source code 4.5 MB; complete package 242 MB
Classification: 14, 16.9
External routines: OpenGL, OpenCL
Nature of problem: Integrate N-body simulations, mass-spring models
Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL
Running time: Problem dependent
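As a minimal illustration of the kind of many-particle stepping this abstract describes (here for the mass-spring model it mentions, integrated with the explicit Euler method), the Python sketch below is not part of the MPPhys package, which is written in C++ with OpenGL/OpenCL; all names are illustrative.

```python
import numpy as np

def euler_step_springs(pos, vel, mass, springs, k, rest, dt):
    """One explicit Euler step for point masses coupled by linear springs.

    springs : list of (i, j) index pairs
    k, rest : spring constant and rest length (shared by all springs here)
    """
    force = np.zeros_like(pos)
    for i, j in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length      # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    acc = force / mass[:, None]
    return pos + dt * vel, vel + dt * acc

# two unit masses joined by one spring, started slightly stretched
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
mass = np.array([1.0, 1.0])
for _ in range(1000):
    pos, vel = euler_step_springs(pos, vel, mass, [(0, 1)], k=10.0, rest=1.0, dt=1e-3)
print(pos[1, 0] - pos[0, 0])   # separation oscillates around the rest length of 1.0
```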
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge- Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
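The recurrence-relation ("differentiation arithmetic") idea behind Taylor series integration can be shown on a toy ODE; the sketch below advances y' = y² using the Cauchy-product recurrence for the Taylor coefficients and evaluates the series with Horner's rule. It is only a schematic analogue of what SNAP does for the trajectory equations, not the SNAP implementation.

```python
import numpy as np

def taylor_step(y0, h, order=20):
    """Advance the toy ODE y' = y**2 by one step of size h with a Taylor series.

    The Taylor coefficients y_k of the local solution obey the product
    (Cauchy) recurrence  (k+1)*y_{k+1} = sum_{i=0..k} y_i * y_{k-i}.
    """
    c = np.zeros(order + 1)
    c[0] = y0
    for k in range(order):
        c[k + 1] = np.dot(c[:k + 1], c[k::-1]) / (k + 1)   # Cauchy product of y with itself
    y = 0.0
    for coeff in c[::-1]:                                   # evaluate the series via Horner
        y = y * h + coeff
    return y

# exact solution of y' = y**2 with y(0) = 0.5 is y(t) = 1/(2 - t)
y = 0.5
for _ in range(10):
    y = taylor_step(y, 0.1)
print(y, 1.0 / (2.0 - 1.0))   # both close to 1.0
```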
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
Cummins, Cloe; McLean, Blake; Halaki, Mark; Orr, Rhonda
2017-07-01
To quantify the external training loads of positional groups in preseason training drills. Thirty-three elite rugby league players were categorized into 1 of 4 positional groups: outside backs (n = 9), adjustables (n = 9), wide-running forwards (n = 9), and hit-up forwards (n = 6). Data for 8 preseason weeks were collected using microtechnology devices. Training drills were classified based on drill focus: speed and agility, conditioning, and generic and positional skills. Total, high-speed, and very-high-speed distance decreased across the preseason in speed and agility (moderate, small, and small, respectively), conditioning (large, large, and small) and generic skills (large, large, and large). The duration of speed and generic skills also decreased (77% and 48%, respectively). This was matched by a concomitant increase in total distance (small), high-speed running (small), very-high-speed running (moderate), and 2-dimensional (2D) BodyLoad (small) demands in positional skills. In positional skills, hit-up forwards (1240 ± 386 m) completed less very-high-speed running than outside backs (2570 ± 1331 m) and adjustables (2121 ± 1163 m). Hit-up forwards (674 ± 253 AU) experienced greater 2D BodyLoad demands than outside backs (432 ± 230 AU, P = .034). In positional drills, hit-up forwards experienced greater relative 2D BodyLoad demands than outside backs (P = .015). Conversely, outside backs experienced greater relative high- (P = .007) and very-high-speed-running (P < .001) demands than hit-up forwards. Significant differences were observed in training loads between positional groups during positional skills but not in speed and agility, conditioning, and generic skills. This work also highlights the importance of different external-load parameters to adequately quantify workload across different positional groups.
ERIC Educational Resources Information Center
Beets, Michael W.; Pitetti, Kenneth H.; Cardinal, Bradley J.
2005-01-01
This study examined the cardiovascular fitness (CVF, Progressive Aerobic Cardiovascular Endurance Run [PACER], number of laps completed) and the prevalence of at risk of overweight (AR) and overweight (OW) among 10-15-year-olds (48% girls) from the following ethnic backgrounds: African American (n = 2,604), Asian-Pacific Islander (n = 3,888),…
Canham-Chervak, Michelle; Steelman, Ryan A; Schuh, Anna; Jones, Bruce H
2016-11-01
Injuries are a barrier to military medical readiness, and overexertion has historically been a leading mechanism of injury among active duty U.S. Army soldiers. Details are needed to inform prevention planning. The Defense Medical Surveillance System (DMSS) was queried for unique medical encounters among active duty Army soldiers consistent with the military injury definition and assigned an overexertion external cause code (ICD-9: E927.0-E927.9) in 2014 (n=21,891). Most (99.7%) were outpatient visits and 60% were attributed specifically to sudden strenuous movement. Among the 41% (n=9,061) of visits with an activity code (ICD-9: E001-E030), running was the most common activity (n=2,891, 32%); among the 19% (n=4,190) with a place of occurrence code (ICD-9: E849.0-E849.9), the leading location was recreation/sports facilities (n=1,332, 32%). External cause codes provide essential details, but the data represented less than 4% of all injury-related medical encounters among U.S. Army soldiers in 2014. Efforts to improve external cause coding are needed, and could be aligned with training on and enforcement of ICD-10 coding guidelines throughout the Military Health System.
PoMiN: A Post-Minkowskian N-Body Solver
NASA Astrophysics Data System (ADS)
Feng, Justin; Baumann, Mark; Hall, Bryton; Doss, Joel; Spencer, Lucas; Matzner, Richard
2018-05-01
PoMiN is a lightweight N-body code based on the Post-Minkowskian N-body Hamiltonian of Ledvinka, Schafer, and Bicak, which includes General Relativistic effects up to first order in Newton's constant G, and all orders in the speed of light c. PoMiN is a single file written in C and uses a fourth-order Runge-Kutta integration scheme. PoMiN has also been written to handle an arbitrary number of particles (both massive and massless) with a computational complexity that scales as O(N^2).
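The structure described here, a fourth-order Runge-Kutta step applied to an N-body system whose force evaluation scales as O(N²), can be sketched with a plain Newtonian toy example (this is not PoMiN's post-Minkowskian Hamiltonian; the gravitational constant, softening length, and function names below are illustrative assumptions):

    import numpy as np

    def accelerations(pos, mass, G=1.0, eps=1e-3):
        """O(N^2) pairwise Newtonian accelerations with a small softening."""
        n = len(mass)
        acc = np.zeros_like(pos)
        for i in range(n):
            d = pos - pos[i]                      # separation vectors r_j - r_i
            r2 = np.sum(d * d, axis=1) + eps**2
            r2[i] = np.inf                        # exclude self-interaction
            acc[i] = G * np.sum((mass / r2**1.5)[:, None] * d, axis=0)
        return acc

    def rk4_step(pos, vel, mass, dt):
        """One classical fourth-order Runge-Kutta step for the whole system."""
        def deriv(p, v):
            return v, accelerations(p, mass)
        k1p, k1v = deriv(pos, vel)
        k2p, k2v = deriv(pos + 0.5 * dt * k1p, vel + 0.5 * dt * k1v)
        k3p, k3v = deriv(pos + 0.5 * dt * k2p, vel + 0.5 * dt * k2v)
        k4p, k4v = deriv(pos + dt * k3p, vel + dt * k3v)
        return (pos + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p),
                vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

The cost per step is dominated by the four acceleration evaluations, each O(N²), which is the same scaling quoted for PoMiN.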
Matter power spectrum and the challenge of percent accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Aurel; Teyssier, Romain; Potter, Doug
2016-04-01
Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10⁹ h⁻¹ M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
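The trillion-particle figure follows from simple bookkeeping: the particle count is the mean comoving matter density times the box volume divided by the particle mass. A quick check of the quoted numbers (assuming a matter density parameter of Ω_m ≈ 0.3, an illustrative value rather than the paper's exact choice):

    # Units: lengths in h^-1 Mpc, masses in h^-1 Msun.
    RHO_CRIT = 2.775e11     # critical density, 2.775e11 (h^-1 Msun) / (h^-1 Mpc)^3
    OMEGA_M = 0.3           # assumed matter density parameter

    def particles_needed(box_size, max_particle_mass):
        """Particle count so that the particle mass does not exceed max_particle_mass."""
        total_mass = OMEGA_M * RHO_CRIT * box_size**3
        return total_mass / max_particle_mass

    n = particles_needed(500.0, 1.0e9)   # L = 0.5 h^-1 Gpc, M_p = 1e9 h^-1 Msun
    print(f"{n:.2e} particles, i.e. about {round(n ** (1/3))}^3")
    # about 1.04e10 particles, i.e. roughly 2180^3

Scaling the same estimate up to the multi-Gpc volumes of DES, LSST, or Euclid pushes the count past 10¹², consistent with the abstract's conclusion.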
Optical Detection System Model.
1982-04-01
duration of the signal does not have to be a specific value: it can be a very large value. The ODS code will work on the signal in 1024 point... WARNING: THIS WILL SUPERSEDE ANY EXISTING SEQUENCE (Y/N): Y .RUN WVINP THIS IS THE WAVE FORM SECTION. DO YOU WANT HELP? (Y/N): N ENTER WAVELENGTH (IN
Effects of restricted feeding schedules on circadian organization in squirrel monkeys
NASA Technical Reports Server (NTRS)
Boulos, Z.; Frim, D. M.; Dewey, L. K.; Moore-Ede, M. C.
1989-01-01
Free running circadian rhythms of motor activity, food-motivated lever-pressing, and either drinking (N = 7) or body temperature (N = 3) were recorded from 10 squirrel monkeys maintained in constant illumination with unlimited access to food. Food availability was then restricted to a single unsignaled 3-hour interval each day. The feeding schedule failed to entrain the activity rhythms of 8 monkeys, which continued to free-run. Drinking was almost completely synchronized by the schedule, while body temperature showed a feeding-induced rise superimposed on a free-running rhythm. Nonreinforced lever-pressing showed both a free-running component and a 24-hour component that anticipated the time of feeding. At the termination of the schedule, all recorded variables showed free-running rhythms, but in 3 animals the initial phase of the postschedule rhythms was advanced by several hours, suggesting relative coordination. Of the remaining 2 animals, one exhibited stable entrainment of all 3 recorded rhythms, while the other appeared to entrain temporarily to the feeding schedule. These results indicate that restricted feeding schedules are only a weak zeitgeber for the circadian pacemaker generating free-running rhythms in the squirrel monkey. Such schedules, however, may entrain a separate circadian system responsible for the timing of food-anticipatory changes in behavior and physiology.
NASA Astrophysics Data System (ADS)
Olsson, O.
2018-01-01
We present a novel heuristic derived from a probabilistic cost model for approximate N-body simulations. We show that this new heuristic can be used to guide tree construction towards higher quality trees with improved performance over current N-body codes. This represents an important step beyond the current practice of using spatial partitioning for N-body simulations, and enables adoption of a range of state-of-the-art algorithms developed for computer graphics applications to yield further improvements in N-body simulation performance. We outline directions for further developments and review the most promising such algorithms.
R & D GTDS SST: Code Flowcharts and Input
1995-01-01
trajectory from a given set of initial conditions. Typical output is in the form of a printer file of Cartesian coordinates and Keplerian orbital ... orbiting the Earth. The input data specified for an EPHEM run are: (i) initial elements and epoch, (ii) orbit generator selection, (iii) conversion of osculating... discussed. ELEMENT sets coordinate system, reference central body, and first components of initial state; ELEMENT sets the second
The impact of galaxy geometry and mass evolution on the survival of star clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madrid, Juan P.; Hurley, Jarrod R.; Martig, Marie
2014-04-01
Direct N-body simulations of globular clusters in a realistic Milky-Way-like potential are carried out using the code NBODY6 to determine the impact of the host galaxy disk mass and geometry on the survival of star clusters. A relation between disk mass and star-cluster dissolution timescale is derived. These N-body models show that doubling the mass of the disk from 5 × 10¹⁰ M_☉ to 10 × 10¹⁰ M_☉ halves the dissolution time of a satellite star cluster orbiting the host galaxy at 6 kpc from the galactic center. Different geometries in a disk of identical mass can determine either the survival or dissolution of a star cluster orbiting within the inner 6 kpc of the galactic center. Furthermore, disk geometry has measurable effects on the mass loss of star clusters up to 15 kpc from the galactic center. N-body simulations performed with a fine output time step show that at each disk crossing the outer layers of star clusters experience an increase in velocity dispersion of ∼5% of the average velocity dispersion in the outer section of star clusters. This leads to an enhancement of mass loss, a clearly discernible effect of disk shocking. By running models with different inclinations, we determine that star clusters with an orbit that is perpendicular to the Galactic plane have larger mass loss rates than do clusters that evolve in the Galactic plane or in an inclined orbit.
PMARC_12 - PANEL METHOD AMES RESEARCH CENTER, VERSION 12
NASA Technical Reports Server (NTRS)
Ashby, D. L.
1994-01-01
Panel method computer programs are software tools of moderate cost used for solving a wide range of engineering problems. The panel code PMARC_12 (Panel Method Ames Research Center, version 12) can compute the potential flow field around complex three-dimensional bodies such as complete aircraft models. PMARC_12 is a well-documented, highly structured code with an open architecture that facilitates modifications and the addition of new features. Adjustable arrays are used throughout the code, with dimensioning controlled by a set of parameter statements contained in an include file; thus, the size of the code (i.e. the number of panels that it can handle) can be changed very quickly. This allows the user to tailor PMARC_12 to specific problems and computer hardware constraints. In addition, PMARC_12 can be configured (through one of the parameter statements in the include file) so that the code's iterative matrix solver is run entirely in RAM, rather than reading a large matrix from disk at each iteration. This significantly increases the execution speed of the code, but it requires a large amount of RAM memory. PMARC_12 contains several advanced features, including internal flow modeling, a time-stepping wake model for simulating either steady or unsteady (including oscillatory) motions, a Trefftz plane induced drag computation, off-body and on-body streamline computations, and computation of boundary layer parameters using a two-dimensional integral boundary layer method along surface streamlines. In a panel method, the surface of the body over which the flow field is to be computed is represented by a set of panels. Singularities are distributed on the panels to perturb the flow field around the body surfaces. PMARC_12 uses constant strength source and doublet distributions over each panel, thus making it a low order panel method. Higher order panel methods allow the singularity strength to vary linearly or quadratically across each panel. Experience has shown that low order panel methods can provide nearly the same accuracy as higher order methods over a wide range of cases with significantly reduced computation times; hence, the low order formulation was adopted for PMARC_12. The flow problem is solved by modeling the body as a closed surface dividing space into two regions: the region external to the surface in which an unknown velocity potential exists representing the flow field of interest, and the region internal to the surface in which a known velocity potential (representing a fictitious flow) is prescribed as a boundary condition. Both velocity potentials are required to satisfy Laplace's equation. A surface integral equation for the unknown potential external to the surface can be written by applying Green's Theorem to the external region. Using the internal potential and zero flow through the surface as boundary conditions, the unknown potential external to the surface can be solved for. When the internal flow option, which allows the analysis of closed ducts, wind tunnels, and similar internal flow problems, is selected, the geometry is modeled such that the flow field of interest is inside the geometry and the fictitious flow is outside the geometry. Items such as wings, struts, or aircraft models can be included in the internal flow problem. The time-stepping wake model gives PMARC_12 the ability to model both steady and unsteady flow problems. The wake is convected downstream from the wake-separation line by the local velocity field. 
With each time step, a new row of wake panels is added to the wake at the wake-separation line. Time stepping can start from time t=0 (no initial wake) or from time t=t0 (an initial wake is specified). A wide range of motions can be prescribed, including constant rates of translation, constant rate of rotation about an arbitrary axis, oscillatory translation, and oscillatory rotation about any of the three coordinate axes. Investigators interested in a visual representation of the phenomenon they are studying with PMARC_12 may want to consider obtaining the program GVS (ARC-13361), the General Visualization System. GVS is a Silicon Graphics IRIS program which was created for the purpose of supporting the scientific visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input. This makes the code fairly machine independent. A compiler which supports the NAMELIST extension is required. The amount of free disk space and RAM memory required for PMARC_12 will vary depending on how the code is dimensioned using the parameter statements in the include file. The recommended minimum requirements are 20Mb of free disk space and 4Mb of RAM. PMARC_12 has been successfully implemented on a Macintosh II running System 6.0.7 or 7.0 (using MPW/Language Systems Fortran 3.0), a Sun SLC running SunOS 4.1.1, an HP 720 running HP-UX 8.07, an SGI IRIS running IRIX 4.0 (it will not run under IRIX 3.x.x without modifications), an IBM RS/6000 running AIX, a DECstation 3100 running ULTRIX, and a CRAY-YMP running UNICOS 6.0 or later. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines. The standard distribution medium for PMARC_12 is a set of three 3.5 inch 800K Macintosh format diskettes and one 3.5 inch 1.44Mb Macintosh format diskette which contains an electronic copy of the documentation in MS Word 5.0 format for the Macintosh. Alternate distribution media and formats are available upon request, but these will not include the electronic version of the document. No executables are included on the distribution media. This program is an update to PMARC version 11, which was released in 1989. PMARC_12 was released in 1993. It is available only for use by United States citizens.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doucet, Mathieu; Hobson, Tanner C.; Ferraz Leal, Ricardo Miguel
2017-08-01
The Django Remote Submission (DRS) is a Django (Django, n.d.) application to manage long-running job submission, including starting the job, saving logs, and storing results. It is an independent project available as a standalone pypi package (PyPi, n.d.) and can be easily integrated into any Django project. The source code is freely available as a GitHub repository (django-remote-submission, n.d.). To run jobs in the background, DRS takes advantage of Celery (Celery, n.d.), a powerful asynchronous job queue used for running tasks in the background, and the Redis Server (Redis, n.d.), an in-memory data structure store. Celery uses brokers to pass messages between a Django project and the Celery workers; Redis is the message broker of DRS. In addition, DRS provides real-time monitoring of the progress of jobs and the associated logs. Through the Django Channels project (Channels, n.d.) and the use of WebSockets, it is possible to asynchronously display the job status and the live job output (standard output and standard error) on a web page.
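A minimal illustration of the Celery-plus-Redis pattern the abstract describes (generic Celery usage, not the actual DRS API; the task name, broker URL, and progress metadata are placeholders):

    # tasks.py -- a long-running job executed in the background by a Celery
    # worker, with Redis acting as the message broker (as in DRS).
    from celery import Celery

    app = Celery("jobs",
                 broker="redis://localhost:6379/0",
                 backend="redis://localhost:6379/1")

    @app.task(bind=True)
    def run_remote_job(self, script_text):
        """Run a submitted script line by line, reporting progress as it goes."""
        log_lines = []
        for i, line in enumerate(script_text.splitlines(), start=1):
            log_lines.append(f"executed: {line}")
            # in DRS-like setups, progress and log lines would also be pushed
            # to the browser over a WebSocket (Django Channels) consumer
            self.update_state(state="PROGRESS", meta={"line": i})
        return {"status": "done", "log": log_lines}

A Django view would enqueue the work with run_remote_job.delay(user_script), and a worker started with "celery -A tasks worker" picks it up from the Redis queue.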
NASA Astrophysics Data System (ADS)
Hickey, M. S.
2008-05-01
Controlled-source electromagnetic geophysical methods provide a noninvasive means of characterizing subsurface structure. In order to properly model the geologic subsurface with a controlled-source time domain electromagnetic (TDEM) system in an extreme topographic environment, we must first see the effects of topography on the forward model data. I run simulations using the Texas A&M University (TAMU) finite element (FEM) code in which I include true 3D topography. From these models we see the limits of how much topography we can include before our forward model can no longer give us accurate data output. The simulations are based on a model of a geologic half space with no cultural noise and focus on topography changes associated with impact crater sites, such as crater rims and central uplift. Several topographical variations of the model are run, but the main constant is that there is only a small conductivity change, on the order of 10⁻¹ S/m, between the host medium and the geologic body within. Asking the following questions will guide us through determining the limits of our code: What is the maximum step we can have before we see fringe effects in our data? At what location relative to the body does the topography cause the most effect? Once we know the limits of the code, we can develop new methods to increase those limits and allow us to better image the subsurface using TDEM in extreme topography.
Ruan, Jesse S; El-Jawahri, Raed; Rouhana, Stephen W; Barbat, Saeed; Prasad, Priya
2006-11-01
The biofidelity of the Ford Motor Company human body finite element (FE) model in side impact simulations was analyzed and evaluated following the procedures outlined in ISO technical report TR9790. This FE model, representing a 50th percentile adult male, was used to simulate the biomechanical impact tests described in ISO-TR9790. These laboratory tests were considered as suitable for assessing the lateral impact biofidelity of the head, neck, shoulder, thorax, abdomen, and pelvis of crash test dummies, subcomponent test devices, and math models that are used to represent a 50th percentile adult male. The simulated impact responses of the head, neck, shoulder, thorax, abdomen, and pelvis of the FE model were compared with the PMHS (Post Mortem Human Subject) data upon which the response requirements for side impact surrogates was based. An overall biofidelity rating of the human body FE model was determined using the ISO-TR9790 rating method. The resulting rating for the human body FE model was 8.5 on a 0 to 10 scale with 8.6-10 being excellent biofidelity. In addition, in order to explore whether there is a dependency of the impact responses of the FE model on different analysis codes, three commercially available analysis codes, namely, LS-DYNA, Pamcrash, and Radioss were used to run the human body FE model. Effects of these codes on biofidelity when compared with ISO-TR9790 data are discussed. Model robustness and numerical issues arising with three different code simulations are also discussed.
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
NASA Astrophysics Data System (ADS)
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.
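The expensive step being offloaded to the GPU, evaluating the structure factor of each trial configuration, is a sum of plane waves over all particles for every scattering vector, and the independence of the scattering vectors is what makes it map naturally onto GPU threads. A schematic CPU/NumPy version of that calculation (variable names are illustrative; the real code works with the experimentally sampled Q grid):

    import numpy as np

    def structure_factor(positions, q_vectors):
        """S(Q) = |sum_j exp(i Q . r_j)|^2 / N for each scattering vector Q.

        positions: (N, 3) particle coordinates; q_vectors: (M, 3) wave vectors.
        Each Q is independent, so on a GPU one thread (or block) can own one Q.
        """
        n = positions.shape[0]
        phases = q_vectors @ positions.T              # (M, N) matrix of Q . r_j
        amplitudes = np.exp(1j * phases).sum(axis=1)  # sum over particles
        return np.abs(amplitudes) ** 2 / n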
Baryonic impact on the dark matter orbital properties of Milky Way-sized haloes
NASA Astrophysics Data System (ADS)
Zhu, Qirong; Hernquist, Lars; Marinacci, Federico; Springel, Volker; Li, Yuexing
2017-04-01
We study the orbital properties of dark matter haloes by combining a spectral method and cosmological simulations of Milky Way-sized Galaxies. We compare the dynamics and orbits of individual dark matter particles from both hydrodynamic and N-body simulations, and find that the fraction of box, tube and resonant orbits of the dark matter halo decreases significantly due to the effects of baryons. In particular, the central region of the dark matter halo in the hydrodynamic simulation is dominated by regular, short-axis tube orbits, in contrast to the chaotic, box and thin orbits dominant in the N-body run. This leads to a more spherical dark matter halo in the hydrodynamic run compared to a prolate one as commonly seen in the N-body simulations. Furthermore, by using a kernel-based density estimator, we compare the coarse-grained phase-space densities of dark matter haloes in both simulations and find that it is lower by ˜0.5 dex in the hydrodynamic run due to changes in the angular momentum distribution, which indicates that the baryonic process that affects the dark matter is irreversible. Our results imply that baryons play an important role in determining the shape, kinematics and phase-space density of dark matter haloes in galaxies.
Studying Tidal Effects In Planetary Systems With Posidonius. A N-Body Simulator Written In Rust.
NASA Astrophysics Data System (ADS)
Blanco-Cuaresma, Sergi; Bolmont, Emeline
2017-10-01
Planetary systems with several planets in compact orbital configurations, such as TRAPPIST-1, are certainly affected by tidal effects, and studying those effects provides important insight into the systems' evolution. We developed a second-generation N-body code based on the tidal model used in Mercury-T, re-implementing and improving its functionality using Rust as the programming language (including a Python interface for ease of use) and the WHFAST integrator. The new open-source code ensures memory safety and reproducibility of numerical N-body experiments, improves the spin integration compared with Mercury-T, and allows a new prescription for the dissipation of tidal inertial waves in the convective envelope of stars to be taken into account. Posidonius is also suitable for binary system simulations with evolving stars.
Methodology for fast detection of false sharing in threaded scientific codes
Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang
2014-11-25
A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
FLY MPI-2: a parallel tree code for LSS
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.
2006-04-01
New version program summaryProgram title: FLY 3.1 Catalogue identifier: ADSC_v2_0 Licensing provisions: yes Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland No. of lines in distributed program, including test data, etc.: 158 172 No. of bytes in distributed program, including test data, etc.: 4 719 953 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Beowulf cluster, PC, MPP systems Operating system: Linux, Aix RAM: 100M words Catalogue identifier of previous version: ADSC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159 Does the new version supersede the previous version?: yes Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986) Reasons for the new version: The new version of FLY is implemented by using the MPI-2 standard: the distributed version 3.1 was developed by using the MPICH2 library on a PC Linux cluster. Today the FLY performance allows us to consider the FLY code among the most powerful parallel codes for tree N-body simulations. Another important new feature regards the availability of an interface with hydrodynamical Paramesh based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. The idea to build an interface between two codes, that have different and complementary cosmological tasks, allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication schema was totally changed. The new version adopts the MPICH2 library. Now FLY can be executed on all Unix systems having an MPI-2 standard library. The main data structure, is declared in a module procedure of FLY (fly_h.F90 routine). FLY creates the MPI Window object for one-sided communication for all the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR) the following main window objects are created: win_pos, win_vel, win_acc: particles positions velocities and accelerations, win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping: cells positions, masses, quadrupole momenta, tree structure and grouping cells. Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integrator schema, but could be changed by the user. Unusual features: FLY uses the MPI-2 standard: the MPICH2 library on Linux systems was adopted. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: IBM Linux Cluster 1350, 512 nodes with 2 processors for each node and 2 GB RAM for each processor, at Cineca, was adopted to make performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz and 512 KB cache (128 nodes have Nocona processors). 
Internal Network: Myricom LAN Card "C" Version and "D" Version. Operating System: Linux SuSE SLES 8. The code was compiled using the mpif90 compiler version 8.1 with basic optimization options, in order to obtain performance figures that can be meaningfully compared with other generic clusters.
Efficient molecular dynamics simulations with many-body potentials on graphics processing units
NASA Astrophysics Data System (ADS)
Fan, Zheyong; Chen, Wei; Vierimaa, Ville; Harju, Ari
2017-09-01
Graphics processing units have been extensively used to accelerate classical molecular dynamics simulations. However, there is much less progress on the acceleration of force evaluations for many-body potentials compared to pairwise ones. In the conventional force evaluation algorithm for many-body potentials, the force, virial stress, and heat current for a given atom are accumulated within different loops, which could result in write conflict between different threads in a CUDA kernel. In this work, we provide a new force evaluation algorithm, which is based on an explicit pairwise force expression for many-body potentials derived recently (Fan et al., 2015). In our algorithm, the force, virial stress, and heat current for a given atom can be accumulated within a single thread and is free of write conflicts. We discuss the formulations and algorithms and evaluate their performance. A new open-source code, GPUMD, is developed based on the proposed formulations. For the Tersoff many-body potential, the double precision performance of GPUMD using a Tesla K40 card is equivalent to that of the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular dynamics code running with about 100 CPU cores (Intel Xeon CPU X5670 @ 2.93 GHz).
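The scheme's central point, that an explicit pairwise force expression lets each atom's force, virial, and heat-current contributions be accumulated by the one thread responsible for that atom so no two threads ever write to the same memory location, can be sketched in serial form (the pair_force function is a placeholder standing in for the derived pairwise expression, not the actual Tersoff force):

    import numpy as np

    def forces_one_writer_per_atom(positions, neighbors, pair_force):
        """Gather-style force evaluation: the body of the loop over atom i only
        ever writes force[i], mirroring one CUDA thread per atom with no write
        conflicts and no atomic operations."""
        force = np.zeros_like(positions)
        for i in range(len(positions)):        # on the GPU: i = thread index
            f_i = np.zeros(3)
            for j in neighbors[i]:             # neighbor list of atom i
                f_i += pair_force(positions, i, j)
            force[i] = f_i                     # single writer for this entry
        return force

The alternative scatter formulation, in which the thread handling pair (i, j) adds to both force[i] and force[j], is what creates the write conflicts the paper avoids.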
Comparison of a Simple Patched Conic Trajectory Code to Commercially Available Software
NASA Technical Reports Server (NTRS)
AndersonPark, Brooke M.; Wright, Henry S.
2007-01-01
Often in spaceflight proposal development, mission designers must evaluate numerous trajectories as different design factors are investigated. Although there are numerous commercial software packages available to help develop and analyze trajectories, most take a significant amount of time to develop the trajectory itself, which isn't effective when working on proposals. Thus a new code, PatCon, which is both quick and easy to use, was developed to aid mission designers to conduct trade studies on launch and arrival times for any given target planet. The code is able to run quick analyses, due to the incorporation of the patched conic approximation, to determine the trajectory. PatCon provides a simple but accurate approximation of the four-body motion problem that would be needed to solve any planetary trajectory. PatCon has been compared to a patched conic test case for verification, with limited validation or comparison with other COTS software. This paper describes the patched conic technique and its implementation in PatCon. A description of the results and comparison of PatCon to other more evolved codes such as AGI's Satellite Tool Kit and JAQAR Astrodynamics' Swingby Calculator is provided. The results will include percent differences in values such as C3 numbers and Vinfinity at arrival, and other more subjective results such as the time it takes to build the simulation, and actual calculation time.
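The quantities compared above (C3 at departure, v-infinity at arrival) fall directly out of a patched conic calculation. A stripped-down sketch for an Earth-to-Mars Hohmann-style transfer, assuming circular, coplanar planetary orbits (an assumption made here for brevity; PatCon itself works from full orbital elements):

    import math

    MU_SUN = 1.32712e11      # Sun's gravitational parameter, km^3/s^2
    AU = 1.495979e8          # astronomical unit, km

    def hohmann_patched_conic(r_dep=1.0 * AU, r_arr=1.524 * AU):
        """Departure C3 and arrival v-infinity for a Hohmann transfer."""
        a_transfer = 0.5 * (r_dep + r_arr)
        # heliocentric speeds on the transfer ellipse (vis-viva) ...
        v_transfer_dep = math.sqrt(MU_SUN * (2.0 / r_dep - 1.0 / a_transfer))
        v_transfer_arr = math.sqrt(MU_SUN * (2.0 / r_arr - 1.0 / a_transfer))
        # ... and on the circular planetary orbits
        v_planet_dep = math.sqrt(MU_SUN / r_dep)
        v_planet_arr = math.sqrt(MU_SUN / r_arr)
        v_inf_dep = v_transfer_dep - v_planet_dep
        v_inf_arr = v_planet_arr - v_transfer_arr
        return v_inf_dep**2, v_inf_arr        # C3 (km^2/s^2), v_inf (km/s)

    c3, v_inf = hohmann_patched_conic()
    print(f"C3 = {c3:.1f} km^2/s^2, arrival v_inf = {v_inf:.2f} km/s")
    # roughly 8.7 km^2/s^2 and 2.6 km/s for this idealized Earth-Mars case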
ACON: a multipurpose production controller for plasma physics codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snell, C.
1983-01-01
ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after Mass failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.
Classification Techniques for Digital Map Compression
1989-03-01
classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
Soft tissues store and return mechanical energy in human running.
Riddick, R C; Kuo, A D
2016-02-08
During human running, softer parts of the body may deform under load and dissipate mechanical energy. Although tissues such as the heel pad have been characterized individually, the aggregate work performed by all soft tissues during running is unknown. We therefore estimated the work performed by soft tissues (N=8 healthy adults) at running speeds ranging 2-5 m s(-1), computed as the difference between joint work performed on rigid segments, and whole-body estimates of work performed on the (non-rigid) body center of mass (COM) and peripheral to the COM. Soft tissues performed aggregate negative work, with magnitude increasing linearly with speed. The amount was about -19 J per stance phase at a nominal 3 m s(-1), accounting for more than 25% of stance phase negative work performed by the entire body. Fluctuations in soft tissue mechanical power over time resembled a damped oscillation starting at ground contact, with peak negative power comparable to that for the knee joint (about -500 W). Even the positive work from soft tissue rebound was significant, about 13 J per stance phase (about 17% of the positive work of the entire body). Assuming that the net dissipative work is offset by an equal amount of active, positive muscle work performed at 25% efficiency, soft tissue dissipation could account for about 29% of the net metabolic expenditure for running at 5 m s(-1). During running, soft tissue deformations dissipate mechanical energy that must be offset by active muscle work at non-negligible metabolic cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
Testing and Validating Gadget2 for GPUs
NASA Astrophysics Data System (ADS)
Wibking, Benjamin; Holley-Bockelmann, K.; Berlind, A. A.
2013-01-01
We are currently upgrading a version of Gadget2 (Springel et al., 2005) that is optimized for NVIDIA's CUDA GPU architecture (Frigaard, unpublished) to work with the latest libraries and graphics cards. Preliminary tests of its performance indicate a ~40x speedup in the particle force tree approximation calculation, with an overall speedup of 5-10x for cosmological simulations run with GPUs compared to running on the same CPU cores without GPU acceleration. We believe this speedup can be reasonably increased by an additional factor of two with further optimization, including overlap of computation on CPU and GPU. Tests of single-precision GPU numerical fidelity currently indicate accuracy of the mass function and the spectral power density to within a few percent of extended-precision CPU results with the unmodified form of Gadget. Additionally, we plan to test and optimize the GPU code for Millennium-scale "grand challenge" simulations of >10^9 particles, a scale that has been previously untested with this code, with the aid of the NSF XSEDE flagship GPU-based supercomputing cluster codenamed "Keeneland." Current work involves additional validation of numerical results, extending the numerical precision of the GPU calculations to double precision, and evaluating performance/accuracy tradeoffs. We believe that this project, if successful, will yield substantial computational performance benefits to the N-body research community as the next generation of GPU supercomputing resources becomes available, both increasing the electrical power efficiency of ever-larger computations (making simulations possible a decade from now at scales and resolutions unavailable today) and accelerating the pace of research in the field.
The effects of wearing undersized lower-body compression garments on endurance running performance.
Dascombe, Ben J; Hoare, Trent K; Sear, Joshua A; Reaburn, Peter R; Scanlan, Aaron T
2011-06-01
To examine whether wearing various size lower-body compression garments improves physiological and performance parameters related to endurance running in well-trained athletes. Eleven well-trained middle-distance runners and triathletes (age: 28.4 ± 10.0 y; height: 177.3 ± 4.7 cm; body mass: 72.6 ± 8.0 kg; VO2max: 59.0 ± 6.7 mL·kg-1·min-1) completed repeat progressive maximal tests (PMT) and time-to-exhaustion (TTE) tests at 90% VO2max wearing either manufacturer-recommended LBCG (rLBCG), undersized LBCG (uLBCG), or loose running shorts (CONT). During all exercise testing, several systemic and peripheral physiological measures were taken. The results indicated similar effects of wearing rLBCG and uLBCG compared with the control. Across the PMT, wearing either LBCG resulted in significantly (P < .05) increased oxygen consumption, O2 pulse, and deoxyhemoglobin (HHb) and decreased running economy, oxyhemoglobin, and tissue oxygenation index (TOI) at low-intensity speeds (8-10 km·h-1). At higher speeds (12-18 km·h-1), wearing LBCG increased regional blood flow (nTHI) and HHb values, but significantly lowered heart rate and TOI. During the TTE, wearing either LBCG significantly (P < .05) increased HHb concentration, whereas wearing uLBCG also significantly (P < .05) increased nTHI. No improvement in endurance running performance was observed in either compression condition. The results suggest that wearing LBCG facilitated a small number of cardiorespiratory and peripheral physiological benefits that appeared mostly related to improvements in venous flow. However, these improvements appear trivial to athletes, as they did not correspond to any improvement in endurance running performance.
Pressure Ratio to Thermal Environments
NASA Technical Reports Server (NTRS)
Lopez, Pedro; Wang, Winston
2012-01-01
The pressure ratio to thermal environments (PRatTlE.pl) program is a Perl-language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local CFD (computational fluid dynamics) pressures at the requested point and the reference point. This innovation provides pressure-ratio-based thermal environments in an automated and traceable manner. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.
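The scaling PRatTlE automates reduces to a one-line relation: heating at a requested body point equals the reference-point heating multiplied by the ratio of local CFD pressures. A toy version of that bookkeeping (the body-point names, units, and data layout are illustrative, not PRatTlE's input format):

    def pressure_ratio_heating(q_ref, p_ref, body_point_pressures):
        """Scale reference heating q_ref by the local-to-reference pressure
        ratio taken from CFD solutions, one value per requested body point."""
        return {name: q_ref * (p_local / p_ref)
                for name, p_local in body_point_pressures.items()}

    # e.g. reference heating 12.0 W/cm^2 at a reference pressure of 2.0 kPa
    heating = pressure_ratio_heating(12.0, 2.0, {"BP1001": 1.4, "BP1002": 2.6})
    # -> {"BP1001": 8.4, "BP1002": 15.6}  (W/cm^2)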
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations for improving EnergyPlus run time from the modeler's perspective and for choosing adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
User's manual: Subsonic/supersonic advanced panel pilot code
NASA Technical Reports Server (NTRS)
Moran, J.; Tinoco, E. N.; Johnson, F. T.
1978-01-01
Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and it should not be construed to represent a finished production program. The pilot code is based on a higher order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given as well as practical instructions for proper configurations modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operations system of the Ames CDC 7600 computer. The program uses overlay structure and thirteen disk files, and it requires approximately 132000 (Octal) central memory words.
Relativistic initial conditions for N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
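The back-scaling step itself, taking a z = 0 linear power spectrum from a Boltzmann code and rescaling it to the simulation's starting redshift with the linear growth factor, is compact. A sketch using the standard matter-plus-Λ growth integral (this deliberately ignores the radiation and gauge subtleties that are the subject of the paper; the cosmological parameters are assumed values):

    import numpy as np
    from scipy.integrate import quad

    OMEGA_M, OMEGA_L = 0.31, 0.69                  # assumed flat LCDM parameters

    def hubble(a):
        """H(a)/H0 for flat LCDM with matter and a cosmological constant."""
        return np.sqrt(OMEGA_M / a**3 + OMEGA_L)

    def growth(a):
        """Unnormalized linear growth factor D(a) (radiation neglected)."""
        integral, _ = quad(lambda x: 1.0 / (x * hubble(x))**3, 1e-8, a)
        return 2.5 * OMEGA_M * hubble(a) * integral

    def backscale_power(p_k_z0, z_ini):
        """Rescale a z = 0 linear power spectrum to the starting redshift."""
        d_ratio = growth(1.0 / (1.0 + z_ini)) / growth(1.0)
        return p_k_z0 * d_ratio**2

Because the growth factor here is scale-independent, the whole spectrum is multiplied by a single number; the point of the paper is precisely to quantify when that simplification remains consistent with the relativistic evolution.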
JANUS: a bit-wise reversible integrator for N-body dynamics
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2018-01-01
Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
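The bit-wise reversibility comes from keeping the state in integers: the floating-point acceleration is rounded once when converted to an integer increment, and subtracting exactly that increment on the way back restores the state to the bit. A toy one-dimensional version of the idea (this is not the actual JANUS scheme, which is a higher-order N-body method; the fixed-point scale and the simple kick-drift splitting are illustrative choices):

    SCALE = 2**32                     # fixed-point scale: integer = value * SCALE

    def to_int(x):
        return int(round(x * SCALE))

    def to_float(i):
        return i / SCALE

    def step_forward(x_int, v_int, dt, accel):
        """Kick then drift, with all state kept as integers."""
        v_int += to_int(accel(to_float(x_int)) * dt)   # kick (rounded once)
        x_int += to_int(to_float(v_int) * dt)          # drift (rounded once)
        return x_int, v_int

    def step_backward(x_int, v_int, dt, accel):
        """Exact inverse: undo the drift, then undo the kick, bit for bit."""
        x_int -= to_int(to_float(v_int) * dt)
        v_int -= to_int(accel(to_float(x_int)) * dt)
        return x_int, v_int

    # a round trip on a harmonic oscillator recovers the initial integers exactly
    accel = lambda x: -x
    state = (to_int(1.0), to_int(0.0))
    s = state
    for _ in range(1000):
        s = step_forward(*s, 1e-3, accel)
    for _ in range(1000):
        s = step_backward(*s, 1e-3, accel)
    assert s == state

Floating-point round-off still enters through the acceleration, but because it is committed to an integer before being applied, the forward and backward passes make identical rounding decisions and the trajectory is reversible to the last bit.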
Molecular dynamics study of Al and Ni₃Al sputtering by Al clusters bombardment
NASA Astrophysics Data System (ADS)
Zhurkin, Eugeni E.; Kolesnikov, Anton S.
2002-06-01
The sputtering of Al and Ni₃Al (100) surfaces induced by the impact of Al ions and Al_N clusters (N = 2, 4, 6, 9, 13, 55) with energies of 100 and 500 eV/atom is studied at the atomic scale by means of classical molecular dynamics (MD). The MD code we used implements a many-body tight-binding potential splined to ZBL at short distances. Special attention has been paid to modelling dense cascades: we used large computational cells with lateral periodic and damped boundary conditions. In addition, long simulation times (10-25 ps) and representative statistics (up to 1000 runs per case) were considered. The total sputtering yields, the energy and time spectra of sputtered particles, and the preferential sputtering of the compound target were analyzed, both in the linear and non-linear regimes. A significant "cluster enhancement" of the sputtering yield was found for cluster sizes N⩾13.
Camic, Clayton L; Housh, Terry J; Zuniga, Jorge M; Traylor, Daniel A; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O; Housh, Dona J
2014-03-01
The purpose of this study was to examine the effects of 28 days of polyethylene glycosylated creatine (PEG-creatine) supplementation (1.25 and 2.50 g·d) on anaerobic performance measures (vertical and broad jumps, 40-yard dash, 20-yard shuttle run, and 3-cone drill), upper- and lower-body muscular strength and endurance (bench press and leg extension), and body composition. This study used a randomized, double-blind, placebo-controlled parallel design. Seventy-seven adult men (mean age ± SD, 22.1 ± 2.5 years; body mass, 81.7 ± 10.8 kg) volunteered to participate and were randomly assigned to a placebo (n = 23), 1.25 g·d of PEG-creatine (n = 27), or 2.50 g·d of PEG-creatine (n = 27) group. The subjects performed anaerobic performance measures, muscular strength (one-repetition maximum [1RM]), and endurance (80% 1RM) tests for bench press and leg extension, and underwater weighing for the determination of body composition at day 0 (baseline), day 14, and day 28. The results indicated that there were improvements (p < 0.0167) in vertical jump, 20-yard shuttle run, 3-cone drill, muscular endurance for bench press, and body mass for at least one of the PEG-creatine groups without changes for the placebo group. Thus, the present results demonstrated that PEG-creatine supplementation at 1.25 or 2.50 g·d had an ergogenic effect on lower-body vertical power, agility, change-of-direction ability, upper-body muscular endurance, and body mass.
Mikus, Catherine R; Rector, R Scott; Arce-Esquivel, Arturo A; Libla, Jessica L; Booth, Frank W; Ibdah, Jamal A; Laughlin, M Harold; Thyfault, John P
2010-10-01
Insulin-mediated glucose disposal is dependent on the vasodilator effects of insulin. In type 2 diabetes, insulin-stimulated vasodilation is impaired as a result of an imbalance in NO and ET-1 production. We tested the hypothesis that chronic voluntary wheel running (RUN) prevents impairments in insulin-stimulated vasodilation associated with obesity and type 2 diabetes independent of the effects of RUN on adiposity by randomizing Otsuka Long Evans Tokushima Fatty (OLETF) rats, a model of hyperphagia-induced obesity and type 2 diabetes, to 1) RUN, 2) caloric restriction (CR; diet adjusted to match body weights of RUN group), or 3) sedentary control (SED) groups (n = 8/group) at 4 wk. At 40 wk, NO- and ET-1-mediated vasoreactivity to insulin (1-1,000 μIU/ml) was assessed in the presence of a nonselective ET-1 receptor blocker (tezosentan) or a NO synthase (NOS) inhibitor [N(G)-nitro-L-arginine methyl ester (L-NAME)], respectively, in second-order arterioles isolated from the white portion of the gastrocnemius muscle. Body weight, fasting plasma glucose, and hemoglobin A1c were lower in RUN and CR than SED (P < 0.05); however, the glucose area under the curve (AUC) following the intraperitoneal glucose tolerance test was lower only in the RUN group (P < 0.05). Vasodilator responses to all doses of insulin were greater in RUN than SED or CR in the presence of a tezosentan (P < 0.05), but group differences in vasoreactivity to insulin with coadministration of L-NAME were not observed. We conclude daily wheel running prevents obesity and type 2 diabetes-associated declines in insulin-stimulated vasodilation in skeletal muscle arterioles through mechanisms that appear to be NO mediated and independent of attenuating excess adiposity in hyperphagic rats.
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
NASA Technical Reports Server (NTRS)
Park, Brooke Anderson; Wright, Henry
2012-01-01
PatCon code was developed to help mission designers run trade studies on launch and arrival times for any given planet. Initially developed in Fortran, the code requires inputs including the launch date, the arrival date, and other orbital parameters of the launch and arrival planets at the given dates. These parameters include the positions of the planets, the eccentricity, semi-major axes, argument of periapsis, ascending node, and inclination of the planets. With these inputs, a patched conic approximation is used to determine the trajectory. The patched conic approximation divides the planetary mission into three parts: (1) the departure phase, in which the two relevant bodies are Earth and the spacecraft, and where the trajectory is a departure hyperbola with Earth at the focus; (2) the cruise phase, in which the two bodies are the Sun and the spacecraft, and where the trajectory is a transfer ellipse with the Sun at the focus; and (3) the arrival phase, in which the two bodies are the target planet and the spacecraft, and where the trajectory is an arrival hyperbola with the planet as the focus.
Long-Term Marathon Running Is Associated with Low Coronary Plaque Formation in Women.
Roberts, William O; Schwartz, Robert S; Kraus, Stacia Merkel; Schwartz, Jonathan G; Peichel, Gretchen; Garberich, Ross F; Lesser, John R; Oesterle, Stephen N; Wickstrom, Kelly K; Knickelbine, Thomas; Harris, Kevin M
2017-04-01
Marathon running is presumed to improve cardiovascular risk, but health benefits of high volume running are unknown. High-resolution coronary computed tomography angiography and cardiac risk factor assessment were completed in women with long-term marathon running histories to compare to sedentary women with similar risk factors. Women who had run at least one marathon per year for 10-25 yr underwent coronary computed tomography angiography, 12-lead ECG, blood pressure and heart rate measurement, lipid panel, and a demographic/health risk factor survey. Sedentary matched controls were derived from a contemporaneous clinical study database. CT scans were analyzed for calcified and noncalcified plaque prevalence, volume, stenosis severity, and calcium score. Women marathon runners (n = 26), age 42-82 yr, with combined 1217 marathons (average 47) exhibited significantly lower coronary plaque prevalence and less calcific plaque volume. The marathon runners also had less risk factors (smoking, hypertension, and hyperlipidemia); significantly lower resting heart rate, body weight, body mass index, and triglyceride levels; and higher high-density lipoprotein cholesterol levels compared with controls (n = 28). The five women runners with coronary plaque had run marathons for more years and were on average 12 yr older (65 vs 53) than the runners without plaque. Women marathon runners had minimal coronary artery calcium counts, lower coronary artery plaque prevalence, and less calcified plaque volume compared with sedentary women. Developing coronary artery plaque in long-term women marathon runners appears related to older age and more cardiac risk factors, although the runners with coronary artery plaque had accumulated significantly more years running marathons.
Direct collapse to supermassive black hole seeds: comparing the AMR and SPH approaches
NASA Astrophysics Data System (ADS)
Luo, Yang; Nagamine, Kentaro; Shlosman, Isaac
2016-07-01
We provide detailed comparison between the adaptive mesh refinement (AMR) code ENZO-2.4 and the smoothed particle hydrodynamics (SPH)/N-body code GADGET-3 in the context of isolated or cosmological direct baryonic collapse within dark matter (DM) haloes to form supermassive black holes. Gas flow is examined by following evolution of basic parameters of accretion flows. Both codes show an overall agreement in the general features of the collapse; however, many subtle differences exist. For isolated models, the codes increase their spatial and mass resolutions at different pace, which leads to substantially earlier collapse in SPH than in AMR cases due to higher gravitational resolution in GADGET-3. In cosmological runs, the AMR develops a slightly higher baryonic resolution than SPH during halo growth via cold accretion permeated by mergers. Still, both codes agree in the build-up of DM and baryonic structures. However, with the onset of collapse, this difference in mass and spatial resolution is amplified, so evolution of SPH models begins to lag behind. Such a delay can have effect on formation/destruction rate of H2 due to UV background, and on basic properties of host haloes. Finally, isolated non-cosmological models in spinning haloes, with spin parameter λ ˜ 0.01-0.07, show delayed collapse for greater λ, but pace of this increase is faster for AMR. Within our simulation set-up, GADGET-3 requires significantly larger computational resources than ENZO-2.4 during collapse, and needs similar resources, during the pre-collapse, cosmological structure formation phase. Yet it benefits from substantially higher gravitational force and hydrodynamic resolutions, except at the end of collapse.
Plasma Interactions with Spacecraft. Volume 1
2011-04-15
64-bit MacOS X environments. N2kScriptRunner, a C++ code that runs a Nascap-2k script outside of the Java user interface, was created. (The rest of this record is table-of-contents and figure-caption residue referring to potentials and collected current at 25 µs using the default script with the original and modified INIVEL velocity initialization.)
Should Body Size Categories Be More Common in Endurance Running Events?
Buresh, Robert
2018-05-01
Thousands of endurance running events are held each year in the United States, and most of them use age and sex categories to account for documented effects of those factors on running performance. However, most running events do not provide categories of body mass, despite abundant evidence that it, too, dramatically influences endurance running performance. The purposes of this article are to (1) discuss how body mass affects endurance running performance, (2) explain several mechanisms through which body mass influences endurance running performance, and (3) suggest possible ways in which body mass might be categorized in endurance running events.
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple-to-use, yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
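The solution method above pairs explicit fourth-order Runge-Kutta time stepping with centred finite differences in space. A minimal one-dimensional sketch of that combination (not code from the NLSEmagic distribution; the grid, the sign convention of the cubic term and the step size are illustrative assumptions):

```python
import numpy as np

def nlse_rhs(psi, dx, s=-1.0):
    """Right-hand side of i*psi_t = -psi_xx + s*|psi|^2*psi, i.e.
    psi_t = i*(psi_xx - s*|psi|^2*psi), using second-order central
    differences and zero (Dirichlet) boundaries."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return 1j * (lap - s * np.abs(psi)**2 * psi)

def rk4_step(psi, dt, dx, s=-1.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = nlse_rhs(psi, dx, s)
    k2 = nlse_rhs(psi + 0.5 * dt * k1, dx, s)
    k3 = nlse_rhs(psi + 0.5 * dt * k2, dx, s)
    k4 = nlse_rhs(psi + dt * k3, dx, s)
    return psi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative run: a sech-shaped pulse in the focusing (s = -1) case.
x = np.linspace(-20.0, 20.0, 512)
dx = x[1] - x[0]
psi = (1.0 / np.cosh(x)).astype(complex)
for _ in range(1000):
    psi = rk4_step(psi, dt=0.5 * dx**2, dx=dx)
```

The explicit scheme needs a time step of order dx squared for stability, which is why the sketch ties dt to the grid spacing.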
Simulated Raman Spectral Analysis of Organic Molecules
NASA Astrophysics Data System (ADS)
Lu, Lu
The advent of laser technology in the 1960s solved the main difficulty of Raman spectroscopy, resulting in simplified Raman spectroscopy instruments and boosting the sensitivity of the technique. Today, Raman spectroscopy is commonly used in chemistry and biology. Because vibrational information is specific to the chemical bonds, Raman spectroscopy provides fingerprints to identify the types of molecules in a sample. In this thesis, we simulate the Raman spectra of organic and inorganic materials with the General Atomic and Molecular Electronic Structure System (GAMESS) and Gaussian, two computational codes that perform several general chemistry calculations. We run these codes on our CPU-based high-performance cluster (HPC). Through the Message Passing Interface (MPI), a standardized and portable message-passing system that allows the codes to run in parallel, we are able to decrease the computation time and increase the sizes and capacities of the systems simulated by the codes. From our simulations, we will set up a database that allows a search algorithm to quickly identify N-H and O-H bonds in different materials. Our ultimate goal is to analyze and identify the spectra of organic matter compositions from meteorites and compare these spectra with terrestrial biologically produced amino acids and residues.
An Advanced N -body Model for Interacting Multiple Stellar Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brož, Miroslav
We construct an advanced model for interacting multiple stellar systems in which we compute all trajectories with a numerical N-body integrator, namely the Bulirsch–Stoer from the SWIFT package. We can then derive various observables: astrometric positions, radial velocities, minima timings (TTVs), eclipse durations, interferometric visibilities, closure phases, synthetic spectra, spectral energy distribution, and even complete light curves. We use a modified version of the Wilson–Devinney code for the latter, in which the instantaneous true phase and inclination of the eclipsing binary are governed by the N-body integration. If all of these types of observations are at one’s disposal, a joint χ² metric and an optimization algorithm (a simplex or simulated annealing) allow one to search for a global minimum and construct very robust models of stellar systems. At the same time, our N-body model is free from artifacts that may arise if mutual gravitational interactions among all components are not self-consistently accounted for. Finally, we present a number of examples showing dynamical effects that can be studied with our code and we discuss how systematic errors may affect the results (and how to prevent this from happening).
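The joint fitting strategy described above amounts to summing χ² contributions from heterogeneous data sets and handing the total to a derivative-free optimizer such as a simplex. A minimal illustration of that bookkeeping (the data sets, model functions and parameter names below are placeholders, not the observables or code of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def chi2_term(observed, errors, predicted):
    """Chi-square contribution of a single data set."""
    return np.sum(((observed - predicted) / errors) ** 2)

def joint_chi2(params, datasets):
    """Sum chi-square over all observable types (astrometry, radial
    velocities, eclipse timings, ...), each with its own model function
    that predicts the data from the shared parameters."""
    return sum(chi2_term(obs, err, model(params))
               for obs, err, model in datasets)

# Hypothetical example: two fake data sets fitted with a simplex search.
t = np.linspace(0.0, 10.0, 50)
true = (2.0, 0.5)                      # placeholder parameters
rv_model = lambda p: np.sin(2 * np.pi * t / p[0])
ast_model = lambda p: p[1] * np.cos(2 * np.pi * t / p[0])
datasets = [
    (rv_model(true) + 0.05 * np.random.randn(t.size), 0.05 * np.ones_like(t), rv_model),
    (ast_model(true) + 0.02 * np.random.randn(t.size), 0.02 * np.ones_like(t), ast_model),
]
best = minimize(joint_chi2, x0=[1.8, 0.4], args=(datasets,), method="Nelder-Mead")
print(best.x)
```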
Maximum likelihood decoding of Reed Solomon Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudan, M.
We present a randomized algorithm which takes as input n distinct points ((x_i, y_i))_{i=1}^{n} from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.
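Stated compactly, the decoder receives n points over a field together with parameters t and d and must return every polynomial of degree at most d agreeing with at least t of the points. The brute-force sketch below only illustrates that input/output contract on a tiny prime field; it enumerates all candidate polynomials and is nothing like the polynomial-time algorithm of the paper:

```python
from itertools import product

def list_decode_bruteforce(points, p, d, t):
    """Return all polynomials over F_p (coefficient tuples, constant term
    first, degree <= d) agreeing with at least t of the (x, y) points.
    Exponential in d; for illustration on tiny fields only."""
    def evaluate(coeffs, x):
        y = 0
        for c in reversed(coeffs):       # Horner's rule mod p
            y = (y * x + c) % p
        return y

    hits = []
    for coeffs in product(range(p), repeat=d + 1):
        agree = sum(1 for x, y in points if evaluate(coeffs, x) == y)
        if agree >= t:
            hits.append(coeffs)
    return hits

# Example over F_7: y = 3x + 2 with two corrupted positions.
pts = [(x, (3 * x + 2) % 7) for x in range(7)]
pts[1] = (1, 0)
pts[4] = (4, 6)
print(list_decode_bruteforce(pts, p=7, d=1, t=5))   # recovers (2, 3)
```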
NASA Astrophysics Data System (ADS)
Baushev, A. N.; del Valle, L.; Campusano, L. E.; Escala, A.; Muñoz, R. R.; Palma, G. A.
2017-05-01
Galaxy observations and N-body cosmological simulations produce conflicting dark matter halo density profiles for galaxy central regions. While simulations suggest a cuspy and universal density profile (UDP) of this region, the majority of observations favor variable profiles with a core in the center. In this paper, we investigate the convergence of standard N-body simulations, especially in the cusp region, following the approach proposed by [1]. We simulate the well-known Hernquist model using the SPH code Gadget-3 and consider the full array of dynamical parameters of the particles. We find that, although the cuspy profile is stable, all integrals of motion characterizing individual particles suffer strong unphysical variations along the whole halo, revealing an effective interaction between the test bodies. This result casts doubts on the reliability of the velocity distribution function obtained in the simulations. Moreover, we find unphysical Fokker-Planck streams of particles in the cusp region. The same streams should appear in cosmological N-body simulations, being strong enough to change the shape of the cusp or even to create it. Our analysis, based on the Hernquist model and the standard SPH code, strongly suggests that the UDPs generally found by the cosmological N-body simulations may be a consequence of numerical effects. A much better understanding of the N-body simulation convergence is necessary before a `core-cusp problem' can properly be used to question the validity of the CDM model.
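The convergence test sketched above boils down to tagging each particle with its integrals of motion in the analytic Hernquist potential at different snapshots and measuring how much they drift. A schematic diagnostic, assuming snapshot arrays of positions and velocities in units with G = M = a = 1 (not the authors' pipeline):

```python
import numpy as np

def hernquist_potential(r, G=1.0, M=1.0, a=1.0):
    """Analytic Hernquist potential phi(r) = -G*M/(r + a)."""
    return -G * M / (r + a)

def integrals_of_motion(pos, vel):
    """Specific energy E and angular-momentum modulus L of each particle
    in the fixed analytic potential (perturbations to the potential are
    neglected in this diagnostic)."""
    r = np.linalg.norm(pos, axis=1)
    E = 0.5 * np.sum(vel**2, axis=1) + hernquist_potential(r)
    L = np.linalg.norm(np.cross(pos, vel), axis=1)
    return E, L

def relative_drift(pos0, vel0, pos1, vel1):
    """Fractional change of E and L between two snapshots; for a truly
    collisionless evolution of a stationary model both should stay near 0."""
    E0, L0 = integrals_of_motion(pos0, vel0)
    E1, L1 = integrals_of_motion(pos1, vel1)
    return (np.abs((E1 - E0) / E0),
            np.abs((L1 - L0) / np.maximum(L0, 1e-12)))
```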
Studies of Planet Formation Using a Hybrid N-Body + Planetesimal Code
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.
2004-01-01
The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: (1) icy planets: models for icy planet formation will demonstrate how the physical properties of debris disks - including the Kuiper Belt in our solar system - depend on initial conditions and input physics; and (2) terrestrial planets: calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Yoshikawa, Kohji; Okamoto, Takashi; Nitadori, Keigo
2012-02-01
We present a high-performance N-body code for self-gravitating collisional systems accelerated with the aid of a new SIMD instruction set extension of the x86 architecture: Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). With one processor core of an Intel Core i7-2600 processor (8 MB cache and 3.40 GHz) based on the Sandy Bridge micro-architecture, we implemented a fourth-order Hermite scheme with an individual timestep scheme (Makino and Aarseth, 1992), and achieved a performance of ∼20 giga floating point number operations per second (GFLOPS) for double-precision accuracy, which is two times and five times higher than that of the previously developed code implemented with the SSE instructions (Nitadori et al., 2006b), and that of a code implemented without any explicit use of SIMD instructions on the same processor core, respectively. We have parallelized the code by using the so-called NINJA scheme (Nitadori et al., 2006a), and achieved ∼90 GFLOPS for a system containing more than N = 8192 particles with 8 MPI processes on four cores. We expect to achieve about 10 tera FLOPS (TFLOPS) for a self-gravitating collisional system with N ∼ 10⁵ on massively parallel systems with at most 800 cores with the Sandy Bridge micro-architecture. This performance will be comparable to that of Graphic Processing Unit (GPU) cluster systems, such as the one with about 200 Tesla C1070 GPUs (Spurzem et al., 2010). This paper offers an alternative to collisional N-body simulations with GRAPEs and GPUs.
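The fourth-order Hermite scheme referenced above predicts positions and velocities with a third-order Taylor expansion using the jerk, then corrects them once new accelerations and jerks are available. A plain, shared-timestep sketch of one such step (no SIMD, no individual-timestep machinery, so not the actual code):

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-4):
    """Direct-summation accelerations and jerks with Plummer softening."""
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):
        dr = pos - pos[i]
        dv = vel - vel[i]
        r2 = np.sum(dr**2, axis=1) + eps2
        r3 = r2 * np.sqrt(r2)
        rv = np.sum(dr * dv, axis=1) / r2
        w = mass / r3
        w[i] = 0.0                             # skip self-interaction
        acc[i] = np.sum(w[:, None] * dr, axis=0)
        jerk[i] = np.sum(w[:, None] * (dv - 3.0 * rv[:, None] * dr), axis=0)
    return acc, jerk

def hermite_step(pos, vel, mass, dt, eps2=1e-4):
    """One shared-timestep fourth-order Hermite predictor-corrector step."""
    a0, j0 = acc_jerk(pos, vel, mass, eps2)
    # Predictor: third-order Taylor expansion using acceleration and jerk.
    pos_p = pos + vel * dt + a0 * dt**2 / 2.0 + j0 * dt**3 / 6.0
    vel_p = vel + a0 * dt + j0 * dt**2 / 2.0
    # Re-evaluate forces at the predicted state, then correct.
    a1, j1 = acc_jerk(pos_p, vel_p, mass, eps2)
    vel_c = vel + (a0 + a1) * dt / 2.0 + (j0 - j1) * dt**2 / 12.0
    pos_c = pos + (vel + vel_c) * dt / 2.0 + (a0 - a1) * dt**2 / 12.0
    return pos_c, vel_c
```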
Exercise volume and intensity: a dose-response relationship with health benefits.
Foulds, Heather J A; Bredin, Shannon S D; Charlesworth, Sarah A; Ivey, Adam C; Warburton, Darren E R
2014-08-01
The health benefits of exercise are well established. However, the relationship between exercise volume and intensity and health benefits remains unclear, particularly the benefits of low-volume and low-intensity exercise. The primary purpose of this investigation was, therefore, to examine the dose-response relationship between exercise volume and intensity and the derived health benefits, including volumes and intensities of activity well below international recommendations. Generally healthy, active participants (n = 72; age = 44 ± 13 years) were assigned randomly to control (n = 10) or one of five 13-week exercise programs: (1) 10-min brisk walking 1×/week (n = 10), (2) 10-min brisk walking 3×/week (n = 10), (3) 30-min brisk walking 3×/week (n = 18), (4) 60-min brisk walking 3×/week (n = 10), and (5) 30-min running 3×/week (n = 14), in addition to their regular physical activity. Health measures evaluated pre- and post-training included blood pressure, body composition, fasting lipids and glucose, and maximal aerobic power (VO2max). Health improvements were observed among programs at least 30 min in duration, including body composition and VO2max: 30-min walking 28.8-34.5 mL kg(-1) min(-1), 60-min walking 25.1-28.9 mL kg(-1) min(-1), and 30-min running 32.4-36.4 mL kg(-1) min(-1). The greater intensity running program also demonstrated improvements in triglycerides. In healthy active individuals, a physical activity program of at least 30 min in duration for three sessions per week is associated with consistent improvements in health status.
Magnetosphere simulations with a high-performance 3D AMR MHD Code
NASA Astrophysics Data System (ADS)
Gombosi, Tamas; Dezeeuw, Darren; Groth, Clinton; Powell, Kenneth; Song, Paul
1998-11-01
BATS-R-US is a high-performance 3D AMR MHD code for space physics applications running on massively parallel supercomputers. In BATS-R-US the electromagnetic and fluid equations are solved with a high-resolution upwind numerical scheme in a tightly coupled manner. The code is very robust and it is capable of spanning a wide range of plasma parameters (such as β, acoustic and Alfvénic Mach numbers). Our code is highly scalable: it achieved a sustained performance of 233 GFLOPS on a Cray T3E-1200 supercomputer with 1024 PEs. This talk reports results from the BATS-R-US code for the GGCM (Geospace General Circulation Model) Phase 1 Standard Model Suite. This model suite contains 10 different steady-state configurations: 5 IMF clock angles (north, south, and three equally spaced angles in between) with 2 IMF field strengths for each angle (5 nT and 10 nT). The other parameters are: solar wind speed = 400 km/sec; solar wind number density = 5 protons/cc; Hall conductance = 0; Pedersen conductance = 5 S; parallel conductivity = ∞.
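The Phase 1 Standard Model Suite is simply the Cartesian product of the listed IMF clock angles and field strengths with the fixed solar-wind and ionosphere parameters. A bookkeeping sketch (the 0° = northward, 180° = southward angle convention and the dictionary keys are assumptions for illustration):

```python
from itertools import product

# Five equally spaced IMF clock angles from north (0 deg) to south (180 deg),
# two IMF magnitudes, and the fixed solar-wind / ionosphere parameters.
clock_angles_deg = [0, 45, 90, 135, 180]
imf_strengths_nT = [5, 10]
fixed = {"v_sw_km_s": 400, "n_sw_cc": 5,
         "hall_conductance_S": 0, "pedersen_conductance_S": 5}

runs = [dict(clock_angle_deg=a, imf_nT=b, **fixed)
        for a, b in product(clock_angles_deg, imf_strengths_nT)]
print(len(runs))   # -> 10 steady-state configurations
```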
Rearfoot striking runners are more economical than midfoot strikers.
Ogueta-Alday, Ana; Rodríguez-Marroyo, José Antonio; García-López, Juan
2014-03-01
This study aimed to analyze the influence of foot strike pattern on running economy and biomechanical characteristics in subelite runners with a similar performance level. Twenty subelite long-distance runners participated and were divided into two groups according to their foot strike pattern: rearfoot (RF, n = 10) and midfoot (MF, n = 10) strikers. Anthropometric characteristics were measured (height, body mass, body mass index, skinfolds, circumferences, and lengths); physiological (VO2max, anaerobic threshold, and running economy) and biomechanical characteristics (contact and flight times, step rate, and step length) were registered during both incremental and submaximal tests on a treadmill. There were no significant intergroup differences in anthropometrics, VO2max, or anaerobic threshold measures. RF strikers were 5.4%, 9.3%, and 5.0% more economical than MF at submaximal speeds (11, 13, and 15 km·h⁻¹ respectively, although the difference was not significant at 15 km·h⁻¹, P = 0.07). Step rate and step length were not different between groups, but RF showed longer contact time (P < 0.01) and shorter flight time (P < 0.01) than MF at all running speeds. The present study showed that habitually rearfoot striking runners are more economical than midfoot strikers. Foot strike pattern affected both contact and flight times, which may explain the differences in running economy.
Gravitational Effects upon Locomotion Posture
NASA Technical Reports Server (NTRS)
DeWitt, John K.; Bentley, Jason R.; Edwards, W. Brent; Perusek, Gail P.; Samorezov, Sergey
2008-01-01
Researchers use actual microgravity (AM) during parabolic flight and simulated microgravity (SM) obtained with horizontal suspension analogs to better understand the effect of gravity upon gait. In both environments, the gravitational force is replaced by an external load (EL) that returns the subject to the treadmill. However, when compared to normal gravity (N), researchers consistently find reduced ground reaction forces (GRF) and subtle kinematic differences (Schaffner et al., 2005). On the International Space Station, the EL is applied by elastic bungees attached to a waist and shoulder harness. While bungees can provide EL approaching body weight (BW), their force-length characteristics coupled with vertical oscillations of the body during gait result in a variable load. However, during locomotion in N, the EL is consistently equal to 100% body weight. Comparisons between AM and N have shown that during running, GRF are decreased in AM (Schaffner et al, 2005). Kinematic evaluations in the past have focussed on joint range of motion rather than joint posture at specific instances of the gait cycle. The reduced GRF in microgravity may be a result of differing hip, knee, and ankle positions during contact. The purpose of this investigation was to compare joint angles of the lower extremities during walking and running in AM, SM, and N. We hypothesized that in AM and SM, joints would be more flexed at heel strike (HS), mid-stance (MS) and toe-off (TO) than in N.
Performance analysis of parallel gravitational N-body codes on large GPU clusters
NASA Astrophysics Data System (ADS)
Huang, Si-Yi; Spurzem, Rainer; Berczik, Peter
2016-01-01
We compare the performance of two very different parallel gravitational N-body codes for astrophysical simulations on large Graphics Processing Unit (GPU) clusters, both of which are pioneers in their own fields as well as on certain mutual scales - NBODY6++ and Bonsai. We carry out benchmarks of the two codes by analyzing their performance, accuracy and efficiency through the modeling of structure decomposition and timing measurements. We find that both codes are heavily optimized to leverage the computational potential of GPUs as their performance has approached half of the maximum single precision performance of the underlying GPU cards. With such performance we predict that a speed-up of 200 - 300 can be achieved when up to 1k processors and GPUs are employed simultaneously. We discuss the quantitative information about comparisons of the two codes, finding that in the same cases Bonsai adopts larger time steps as well as larger relative energy errors than NBODY6++, typically ranging from 10 - 50 times larger, depending on the chosen parameters of the codes. Although the two codes are built for different astrophysical applications, in specified conditions they may overlap in performance at certain physical scales, thus allowing the user to choose either one by fine-tuning parameters accordingly.
CASL VMA FY16 Milestone Report (L3:VMA.VUQ.P13.07) Westinghouse Mixing with COBRA-TF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
2016-09-30
COBRA-TF (CTF) is a low-resolution code currently maintained as CASL's subchannel analysis tool. CTF operates as a two-phase, compressible code over a mesh comprised of subchannels and axially discretized nodes. In part because CTF is a low-resolution code, simulation run time is not computationally expensive, only on the order of minutes. High-resolution codes such as STAR-CCM+ can be used to train lower-fidelity codes such as CTF. Unlike STAR-CCM+, CTF has no turbulence model, only a two-phase turbulent mixing coefficient, β. β can be set to a constant value or calculated in terms of Reynolds number using an empirical correlation. Results from STAR-CCM+ can be used to inform the appropriate value of β. Once β is calibrated, CTF runs can be an inexpensive alternative to costly STAR-CCM+ runs for scoping analyses. Based on the results of CTF runs, STAR-CCM+ can be run for specific parameters of interest. CASL areas of application are CIPS for single-phase analysis and DNB-CTF for two-phase analysis.
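Calibrating β in this way amounts to fitting a simple Reynolds-number correlation to the mixing inferred from the high-fidelity runs and then evaluating that correlation inside the cheap CTF runs. A schematic of the workflow (the power-law correlation form and the numbers below are illustrative placeholders, not CASL values):

```python
import numpy as np

def fit_beta_correlation(reynolds, beta_cfd):
    """Fit beta = c * Re**m in log space to mixing coefficients inferred
    from high-fidelity CFD (e.g. STAR-CCM+) results."""
    m, log_c = np.polyfit(np.log(reynolds), np.log(beta_cfd), 1)
    return np.exp(log_c), m

def beta_from_correlation(re, c, m):
    """Evaluate the calibrated correlation inside the subchannel code."""
    return c * re**m

# Placeholder training data standing in for STAR-CCM+ results.
re_train = np.array([2.0e4, 5.0e4, 1.0e5, 2.0e5])
beta_train = np.array([0.010, 0.008, 0.007, 0.006])
c, m = fit_beta_correlation(re_train, beta_train)
print(beta_from_correlation(8.0e4, c, m))
```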
Design of a Low Aspect Ratio Transonic Compressor Stage Using CFD Techniques
NASA Technical Reports Server (NTRS)
Sanger, Nelson L.
1994-01-01
A transonic compressor stage has been designed for the Naval Postgraduate School Turbopropulsion Laboratory. The design relied heavily on CFD techniques while minimizing conventional empirical design methods. The low aspect ratio (1.2) rotor has been designed for a specific head ratio of .25 and a tip relative inlet Mach number of 1.3. Overall stage pressure ratio is 1.56. The rotor was designed using an Euler code augmented by a distributed body force model to account for viscous effects. This provided a relatively quick-running design tool, and was used for both rotor and stator calculations. The initial stator sections were sized using a compressible, cascade panel code. In addition to being used as a case study for teaching purposes, the compressor stage will be used as a research stage. Detailed measurements, including non-intrusive LDV, will be compared with the design computations, and with the results of other CFD codes, as a means of assessing and improving the computational codes as design tools.
N-body simulations in modified Newtonian dynamics
NASA Astrophysics Data System (ADS)
Nipoti, C.; Londrillo, P.; Ciotti, L.
We describe some results obtained with N-MODY, a code for N-body simulations of collisionless stellar systems in modified Newtonian dynamics (MOND). We found that a few fundamental dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter. In particular, violent relaxation, phase mixing and galaxy merging take significantly longer in MOND than in Newtonian gravity, while dynamical friction is more effective in a MOND system than in an equivalent Newtonian system with dark matter.
2007-01-01
In this paper we studied the effects of the external fields' polarization on the coupling of pure magnetic fields into the human body. The Finite Difference Time Domain (FDTD) method is used to calculate the current densities induced in a 1 cm resolution anatomically based model with proper tissue conductivities. Twenty different tissues have been considered in this investigation, and a scaled FDTD technique is used to convert the results of a computer code run at 15 MHz to the low frequencies which are encountered in the vicinity of industrial induction heating and melting devices. It has been found that the external magnetic field's orientation relative to the human body has a pronounced impact on the level of induced currents in different body tissues. This may potentially help in developing protective strategies to mitigate situations in which workers are exposed to high levels of external magnetic radiation. PMID:17504520
Differential Cross Section Kinematics for 3-dimensional Transport Codes
NASA Technical Reports Server (NTRS)
Norbury, John W.; Dick, Frank
2008-01-01
In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral and angular distributions in both the lab (spacecraft) and center of momentum frames, for collisions involving 2-, 3- and n-body final states.
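For the two-body case, this kind of kinematics reduces to computing the invariant mass of the system and boosting four-momenta between the lab (spacecraft) frame and the centre-of-momentum frame. A small sketch in natural units (c = 1), not taken from the transport codes themselves:

```python
import numpy as np

def four_momentum(mass, p3):
    """Four-momentum (E, px, py, pz) of a particle with 3-momentum p3."""
    p3 = np.asarray(p3, dtype=float)
    return np.concatenate(([np.sqrt(mass**2 + p3 @ p3)], p3))

def invariant_mass(p_total):
    """sqrt(s) of a total four-momentum."""
    return np.sqrt(p_total[0]**2 - p_total[1:] @ p_total[1:])

def boost(p, beta):
    """Lorentz boost of four-momentum p into a frame moving with velocity beta."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    if b2 == 0.0:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    E = gamma * (p[0] - bp)
    p3 = p[1:] + ((gamma - 1.0) * bp / b2 - gamma * p[0]) * beta
    return np.concatenate(([E], p3))

# Projectile hitting a target at rest in the lab; boost both to the CM frame.
m_proj, m_targ = 0.938, 0.938                 # GeV, proton-proton example
p_proj = four_momentum(m_proj, [0.0, 0.0, 3.0])
p_targ = four_momentum(m_targ, [0.0, 0.0, 0.0])
p_tot = p_proj + p_targ
beta_cm = p_tot[1:] / p_tot[0]                # CM velocity seen in the lab
print(invariant_mass(p_tot))
print(boost(p_proj, beta_cm)[1:] + boost(p_targ, beta_cm)[1:])  # ~ zero
```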
Disability Evaluation System Analysis and Research Annual Report 2015
2016-03-11
that of the military population as a whole; exceeding weight and body fat standards (i.e. overweight or obesity) was the most common condition listed ... prevalent conditions in the general military applicant population [8]. The most common conditions noted at the MEPS were: overweight, obesity, and ... (The rest of this record is residue from a table of ICD-9 diagnosis codes with counts and percentages of conditions and applicants.)
Navalta, James W; Tibana, Ramires Alsamir; Fedor, Elizabeth A; Vieira, Amilton; Prestes, Jonato
2014-01-01
This investigation assessed the lymphocyte subset response to three days of intermittent run exercise to exhaustion. Twelve healthy college-aged males (n = 8) and females (n = 4) (age = 26 ± 4 years; height = 170.2 ± 10 cm; body mass = 75 ± 18 kg) completed an exertion test (maximal running speed and VO2max) and later performed three consecutive days of an intermittent run protocol to exhaustion (30 sec at maximal running speed and 30 sec at half of the maximal running speed). Blood was collected before exercise (PRE) and immediately following the treadmill bout (POST) each day. When the absolute change from baseline was evaluated (i. e., Δ baseline), a significant change in CD4+ and CD8+ for CX3CR1 cells was observed by completion of the third day. Significant changes in both apoptosis and migration were observed following two consecutive days in CD19+ lymphocytes, and the influence of apoptosis persisted following the third day. Given these lymphocyte responses, it is recommended that a rest day be incorporated following two consecutive days of a high-intensity intermittent run program to minimize immune cell modulations and reduce potential susceptibility.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
Particle Number Dependence of the N-body Simulations of Moon Formation
NASA Astrophysics Data System (ADS)
Sasaki, Takanori; Hosono, Natsuki
2018-04-01
The formation of the Moon from the circumterrestrial disk has been investigated by using N-body simulations with the number N of particles limited to 10⁴-10⁵. We develop an N-body simulation code on multiple Pezy-SC processors and deploy the Framework for Developing Particle Simulators to deal with large numbers of particles. We execute several high- and extra-high-resolution N-body simulations of lunar accretion from a circumterrestrial disk of debris generated by a giant impact on Earth. The number of particles is up to 10⁷, in which 1 particle corresponds to a 10 km sized satellitesimal. We find that the spiral structures inside the Roche limit radius differ between low-resolution simulations (N ≤ 10⁵) and high-resolution simulations (N ≥ 10⁶). According to this difference, the angular momentum fluxes, which determine the accretion timescale of the Moon, also depend on the numerical resolution.
On the estimation and detection of the Rees-Sciama effect
NASA Astrophysics Data System (ADS)
Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.
2017-02-01
Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code, HYDRA, and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (I) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (II) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (III) the RS anisotropy cannot be detected either directly-in temperature CMB maps-or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.
Mizuno, Sahiro; Arai, Mari; Todoko, Fumihiko; Yamada, Eri; Goto, Kazushige
2017-01-01
Purpose: To examine the effects of wearing a lower-body compression garment with different body coverage areas during prolonged running on exercise performance and muscle damage. Methods: Thirty male subjects were randomly assigned to one of three groups: (1) wearing a compression tights with 15 mmHg to thigh [n = 10, CT group], (2) wearing a compression socks with 15 mmHg to calf [n = 10, CS group], and (3) wearing a lower-body garment with < 5 mmHg to thigh and calf [n = 10, CON group]. The exercise consisted of 120 min of uphill running at 55% of V˙O2max. Heart rate (HR), rate of perceived exertion (RPE), and running economy (evaluated by VO2) were monitored during exercise every 10 min. Changes in maximum voluntary contraction (MVC) of knee extension and plantar flexion, height of counter movement jump (CMJ) and drop jump (DJ), and scores of subjective feelings of muscle soreness and fatigue were evaluated before exercise, and 60 and 180 min after exercise. Blood samples were collected to determine blood glucose, lactate, serum free fatty acid, myoglobin (Mb), high-sensitivity C-reactive protein, and plasma interleukin-6 concentrations before exercise (after 20 min of rest), at 60 min of exercise, immediately after exercise, and 60 and 180 min after exercise. Results: Changes in HR, RPE, and running economy during exercise did not differ significantly among the three groups. MVC of knee extension and plantar flexion, and DJ decreased significantly following exercise, with no difference among groups. The serum Mb concentration increased significantly with exercise in all groups, whereas the area under the curve for Mb concentration during 180 min post-exercise was significantly lower in the CT group (13,833 ± 1,397 pg/mL 180 min) than in the CON group (24,343 ± 3,370 pg/mL 180 min, P = 0.03). Conclusion: Wearing compression garment on the thigh significantly attenuated the increase in serum Mb concentration after exercise, suggesting that exercise-induced muscle damage was attenuated. PMID:29123488
nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes
NASA Astrophysics Data System (ADS)
Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain
2016-05-01
We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M_200^crit = 1.1 × 10¹⁵ h⁻¹ M⊙) produced by hydrodynamical codes ranging from classic smooth particle hydrodynamics (SPH), newer SPH codes, adaptive and moving mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark matter only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and Vmax distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10⁹ h⁻¹ M⊙ ≲ M* ≲ 10¹² h⁻¹ M⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ∼0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results combined with our companion papers demonstrate that subgrid physics is not just subject to fine-tuning, but the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as stellar mass to halo mass, should be treated with error bars of ∼0.2-0.4 dex.
Analysis of Flexible Car Body of Straddle Monorail Vehicle
NASA Astrophysics Data System (ADS)
Zhong, Yuanmu
2018-03-01
Based on a finite element model of a straddle monorail vehicle, a rigid-flexible coupling dynamic model that accounts for car-body flexibility is established. The influence of the vertical stiffness and vertical damping of the running wheels on the modal parameters of the car body is analyzed, and the effect of car-body flexibility on the modal parameters and vehicle ride quality is also studied. The results show that when the vertical stiffness of the running wheels is below 1 MN/m, the car-body bounce and pitch frequencies increase with increasing wheel stiffness, and remain unchanged for stiffnesses of 1 MN/m or more. When the vertical stiffness of the running wheels is below 1.8 MN/m, the car-body bounce and pitch damping ratios increase with increasing wheel stiffness, and remain unchanged for stiffnesses of 1.8 MN/m or more. The vertical damping of the running wheels has no effect on the car-body bounce and pitch frequencies, whereas the bounce and pitch damping ratios increase with increasing wheel damping. Accounting for car-body flexibility has no effect on the modal parameters of the car but improves the vehicle ride quality index.
Nuclear shell model code CRUNCHER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resler, D.A.; Grimes, S.M.
1988-05-01
A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
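The Lanczos process mentioned above reduces the very large shell-model Hamiltonian in the uncoupled basis to a small tridiagonal matrix whose extreme eigenvalues converge quickly. A dense-matrix sketch of the iteration (not the CRUNCHER implementation, which works with a sparse Hamiltonian built on the fly):

```python
import numpy as np

def lanczos_lowest(H, k=60, n_eigs=5, seed=0):
    """Approximate the lowest eigenvalues of a symmetric matrix H using
    k Lanczos steps with full reorthogonalization."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q, alphas, betas = [q], [], []
    for j in range(k):
        w = H @ Q[j]
        alphas.append(Q[j] @ w)
        w -= alphas[-1] * Q[j]
        if j > 0:
            w -= betas[-1] * Q[j - 1]
        for qi in Q:                      # full reorthogonalization
            w -= (qi @ w) * qi
        beta = np.linalg.norm(w)
        if beta < 1e-12 or j == k - 1:
            break
        betas.append(beta)
        Q.append(w / beta)
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[:n_eigs]

# Example: a random symmetric matrix standing in for a shell-model Hamiltonian.
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 400))
H = (A + A.T) / 2.0
print(lanczos_lowest(H))
print(np.linalg.eigvalsh(H)[:5])          # reference values
```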
Code C# for chaos analysis of relativistic many-body systems with reactions
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.
2012-04-01
In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), could be supplied as a parameter, using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new intuitive analysis method of many-body systems. For exemplification, we implemented a numerical toy-model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread. One processor used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Treatment of two-particle reactions and decays. For each particle, calculation of the time measured in the particle reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.), and reactions/decays, using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, “clusterization map”, and energy conservation precision test. As an example of use, we implemented a toy-model for nuclear relativistic collisions at 4.5 A GeV/c. Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only possibility of implementing more particle properties (spin, isospin, and so on). In the new version, particle properties can be dynamically added using a dictionary object. The application was improved in order to calculate the time measured in the own reference frame of each particle. The following reaction types are treated: two-particle reactions a+b→c+d; decays a→c+d; stimulated decays; and more complicated schemas implemented as various combinations of the previous reactions.
Following our goal of creating a flexible application, the reactions list, including the corresponding properties (cross sections, particle lifetimes, etc.), could be supplied as a parameter, using a specific XML configuration file. The simulation output files were modified for systems with reactions, while also assuring backward compatibility. We propose the “Clusterization Map” as a new investigation method of many-body systems. The multi-dimensional Lyapunov exponent was adapted in order to be used for systems with variable structure. Basic support for Monte Carlo simulations was also added. Additional comments: Windows Forms application for testing the engine. Easy copy/paste based deployment method. Running time: Quadratic complexity.
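The XML-supplied reaction list described above is essentially a table of incoming species, outgoing species and probabilities or cross sections. The sketch below parses a made-up schema with Python's standard library; the element and attribute names are illustrative assumptions, not the actual Chaos Many-Body Engine input format:

```python
import xml.etree.ElementTree as ET

REACTIONS_XML = """
<reactions>
  <reaction probability="0.35" crossSection="12.0">
    <in>p</in><in>n</in>
    <out>d</out><out>pi0</out>
  </reaction>
  <reaction probability="0.10" lifetime="2.6e-8">
    <in>pi+</in>
    <out>mu+</out><out>nu_mu</out>
  </reaction>
</reactions>
"""

def load_reactions(xml_text):
    """Return a list of dicts describing two-particle reactions and decays."""
    reactions = []
    for node in ET.fromstring(xml_text).findall("reaction"):
        reactions.append({
            "in": [e.text for e in node.findall("in")],
            "out": [e.text for e in node.findall("out")],
            "probability": float(node.get("probability", 1.0)),
            "cross_section": float(node.get("crossSection", 0.0)),
            "lifetime": float(node.get("lifetime", 0.0)),
        })
    return reactions

for r in load_reactions(REACTIONS_XML):
    print(" + ".join(r["in"]), "->", " + ".join(r["out"]), r["probability"])
```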
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
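The shortened-run-period idea amounts to simulating one representative week per quarter, scaling each result up to its quarter, and forming the compliance index from the annualized totals. A schematic of that bookkeeping (the week choices, day counts and scaling rule are illustrative assumptions, not the paper's exact procedure):

```python
# One hourly-weather week per quarter stands in for the full year.
REPRESENTATIVE_WEEKS = {"Q1": 7, "Q2": 20, "Q3": 33, "Q4": 46}  # week numbers (assumed)
DAYS_PER_QUARTER = {"Q1": 90, "Q2": 91, "Q3": 92, "Q4": 92}

def annualized_energy(weekly_energy_kwh):
    """Scale simulated energy for each representative week (kWh/week)
    up to its quarter, then sum to an annual estimate."""
    return sum(weekly_energy_kwh[q] / 7.0 * DAYS_PER_QUARTER[q]
               for q in weekly_energy_kwh)

def compliance_index(proposed_weekly, baseline_weekly):
    """Ratio of proposed-design to code-baseline annual energy use."""
    return annualized_energy(proposed_weekly) / annualized_energy(baseline_weekly)

# Hypothetical weekly results from the shortened EnergyPlus runs.
proposed = {"Q1": 310.0, "Q2": 280.0, "Q3": 405.0, "Q4": 330.0}
baseline = {"Q1": 360.0, "Q2": 330.0, "Q3": 470.0, "Q4": 385.0}
print(compliance_index(proposed, baseline))
```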
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
Modeling of Interactions of Ablated Plumes
2008-02-01
code was tested and verified using the Sedov-Taylor explosion problem [24]. A 300 × 300 grid is used so that a single code run takes 30 minutes ... (The remainder of this record is figure and report-header residue; the recoverable captions refer to pressure and temperature contours with velocity vector fields in still air and at 20 km altitude, heat transfer at the TPS, and Figure 9: formation of secondary shock waves.)
Overspeed HIIT in Lower-Body Positive Pressure Treadmill Improves Running Performance.
Gojanovic, Boris; Shultz, Rebecca; Feihl, Francois; Matheson, Gordon
2015-12-01
Optimal high-intensity interval training (HIIT) regimens for running performance are unknown, although most protocols result in some benefit to key performance factors (running economy (RE), anaerobic threshold (AT), or maximal oxygen uptake (VO2max)). Lower-body positive pressure (LBPP) treadmills offer the unique possibility to partially unload runners and reach supramaximal speeds. We studied the use of LBPP to test an overspeed HIIT protocol in trained runners. Eleven trained runners (35 ± 8 yr; VO2max 55.7 ± 6.4 mL·kg⁻¹·min⁻¹) were randomized to an LBPP (n = 6) or a regular treadmill (CON, n = 5), eight sessions over 4 wk of HIIT program. Four to five intervals were run at 100% of velocity at VO2max (vVO2max) during 60% of time to exhaustion at vVO2max (Tlim) with a 1:1 work:recovery ratio. Performance outcomes were 2-mile track time trial, VO2max, vVO2max, vAT, Tlim, and RE. LBPP sessions were carried out at 90% body weight. Group-time effects were present for vVO2max (CON, 17.5 vs. 18.3, P = 0.03; LBPP, 19.7 vs. 22.3 km·h⁻¹; P < 0.001) and Tlim (CON, 307.0 vs. 404.4 s, P = 0.28; LBPP, 444.5 vs. 855.5, P < 0.001). Simple main effects for time were present for field performance (CON, -18; LBPP, -25 s; P = 0.002), VO2max (CON, 57.6 vs. 59.6; LBPP, 54.1 vs. 55.1 mL·kg⁻¹·min⁻¹; P = 0.04) and submaximal HR (157.7 vs. 154.3 and 151.4 vs. 148.5 bpm; P = 0.002). RE was unchanged. A 4-wk HIIT protocol at 100% vVO2max improves field performance, vVO2max, VO2max and submaximal HR in trained runners. Improvements are similar if intervals are run on a regular treadmill or at higher speeds on an LBPP treadmill with 10% body weight reduction. LBPP could provide an alternative for taxing HIIT sessions.
Trends in New U.S. Marine Corps Accessions During the Recent Conflicts in Iraq and Afghanistan
2014-01-01
modest changes over the study period. Favorable trends included recent (2009-2010) improvements in body mass index and physical activity levels ... height, body mass index (BMI) in kg/m² was calculated. Frequency of physical activity before service entry was assessed from self-report. Initial run ... (The remainder of this record is table-footnote residue listing abbreviations (BMI, body mass index; mph, miles per hour; SD, standard deviation) and noting that numbers (n) may not add up to 131,961 because of missing self-reported data.)
Differences in physical fitness and throwing velocity among elite and amateur male handball players.
Gorostiaga, E M; Granados, C; Ibáñez, J; Izquierdo, M
2005-04-01
This study compared physical characteristics (body height, body mass [BM], body fat [BF], and free fatty mass [FFM]), one repetition maximum bench-press (1RM (BP)), jumping explosive strength (VJ), handball throwing velocity, power-load relationship of the leg and arm extensor muscles, 5- and 15-m sprint running time, and running endurance in two handball male teams: elite team, one of the world's leading teams (EM, n = 15) and amateur team, playing in the Spanish National Second Division (AM, n = 15). EM had similar values in body height, BF, VJ, 5- and 15-m sprint running time and running endurance than AM. However, the EM group gave higher values in BM (95.2 +/- 13 kg vs. 82.4 +/- 10 kg, p < 0.05), FFM (81.7 +/- 9 kg vs. 72.4 +/- 7 kg, p < 0.05), 1RM (BP) (107 +/- 12 kg vs. 83 +/- 10 kg, p < 0.001), muscle power during bench-press (18 - 21 %, p < 0.05) and half squat (13 - 17 %), and throwing velocities at standing (23.8 +/- 1.9 m . s (-1) vs. 21.8 +/- 1.6 m . s (-1), p < 0.05) and 3-step running (25.3 +/- 2.2 m . s (-1) vs. 22.9 +/- 1.4 m . s (-1), p < 0.05) actions than the AM group. Significant correlations (r = 0.67 - 0.71, p < 0.05 - 0.01) were observed in EM and AM between individual values of velocity at 30 % of 1RM (BP) and individual values of ball velocity during a standing throw. Significant correlations were observed in EM, but not in AM, between the individual values of velocity during 3-step running throw and the individual values of velocity at 30 % of 1RM (BP) (r = 0.72, p < 0.05), as well as the individual values of power at 100 % of body mass during half-squat actions (r = 0.62, p < 0.05). The present results suggest that more muscular and powerful players are at an advantage in handball. The differences observed in free fatty mass could partly explain the differences observed between groups in absolute maximal strength and muscle power. In EM, higher efficiency in handball throwing velocity may be associated with both upper and lower extremity power output capabilities, whereas in AM this relationship may be different. Endurance capacity does not seem to represent a limitation for elite performance in handball.
Pentium Pro inside. 1; A treecode at 430 Gigaflops on ASCI Red
NASA Technical Reports Server (NTRS)
Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.
1997-01-01
As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N²) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10⁵ times more efficient than the O(N²) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16 Pentium Pro processor Beowulf-class computers (Loki and Hyglac) constructed entirely from commodity personal computer technology, at a cost of roughly $50k each in September, 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.
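The quoted efficiency gain follows from comparing the pair count of the direct O(N²) method with an O(N log N) cost model for the tree code. A back-of-the-envelope sketch (the interactions-per-particle constant is an assumed, illustrative value, not a measured one):

```python
import math

def direct_interactions(n):
    """Pairwise force evaluations per timestep for the O(N^2) method."""
    return n * (n - 1)

def tree_interactions(n, c=500.0):
    """Rough O(N log N) cost model for a treecode; c interactions per
    particle per decade of N is an assumed, illustrative constant."""
    return c * n * math.log10(n)

n = 322_000_000            # particles in the cosmology simulation
ratio = direct_interactions(n) / tree_interactions(n)
print(f"treecode advantage ~ 10^{math.log10(ratio):.1f}")
```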
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE Turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit k-ε turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scaleable with the number of blade rows. Enough flips are run (between 50 and 200) so the solution in the entire machine is not changing. The k-ε equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than the other two directions. This code uses MPI for message passing. The parallel speed-up of the solver portion (no I/O or body force calculation) is reported for a grid which has 227 points axially.
Simplified Thermo-Chemical Modelling For Hypersonic Flow
NASA Astrophysics Data System (ADS)
Sancho, Jorge; Alvarez, Paula; Gonzalez, Ezequiel; Rodriguez, Manuel
2011-05-01
Hypersonic flows are connected with high temperatures, generally associated with strong shock waves that appear in such flows. At high temperatures vibrational degrees of freedom of the molecules may become excited, the molecules may dissociate into atoms, the molecules or free atoms may ionize, and molecular or ionic species, unimportant at lower temperatures, may be formed. In order to take into account these effects, a chemical model is needed, but this model should be simplified in order to be handled by a CFD code, but with a sufficient precision to take into account the physics more important. This work is related to a chemical non-equilibrium model validation, implemented into a commercial CFD code, in order to obtain the flow field around bodies in hypersonic flow. The selected non-equilibrium model is composed of seven species and six direct reactions together with their inverse. The commercial CFD code where the non- equilibrium model has been implemented is FLUENT. For the validation, the X38/Sphynx Mach 20 case is rebuilt on a reduced geometry, including the 1/3 Lref forebody. This case has been run in laminar regime, non catalytic wall and with radiative equilibrium wall temperature. The validated non-equilibrium model is applied to the EXPERT (European Experimental Re-entry Test-bed) vehicle at a specified trajectory point (Mach number 14). This case has been run also in laminar regime, non catalytic wall and with radiative equilibrium wall temperature.
Lower extremity joint kinetics and energetics during backward running.
DeVita, P; Stribling, J
1991-05-01
The purpose of this study was to measure lower extremity joint moments of force and joint muscle powers used to perform backward running. Ten trials of high speed (100 Hz) sagittal plane film records and ground reaction force data (1000 Hz) describing backward running were obtained from each of five male runners. Fifteen trials of forward running data were obtained from one of these subjects. Inverse dynamics were performed on these data to obtain the joint moments and powers, which were normalized to body mass to make between-subject comparisons. Backward running hip moment and power patterns were similar in magnitude and opposite in direction to forward running curves and produced more positive work in stance. Functional roles of knee and ankle muscles were interchanged between backward and forward running. Knee extensors were the primary source of propulsion in backward running owing to greater moment and power output (peak moment = 3.60 N.m.kg-1; peak power = 12.40 W.kg-1) compared with the ankle (peak moment = 1.92 N.m.kg-1; peak power = 7.05 W.kg-1). The ankle plantarflexors were the primary shock absorbers, producing the greatest negative power (peak = -6.77 W.kg-1) during early stance. Forward running had greater ankle moment and power output for propulsion and greater knee negative power for impact attenuation. The large knee moment in backward running supported previous findings indicating that backward running training leads to increased knee extensor torque capabilities.
Bhati, Pooja; Bansal, Vishal; Moiz, Jamal Ali
2017-08-24
Purpose The present study was conducted to compare the effects of low volume of high intensity interval training (LVHIIT) and high volume of high intensity interval training (HVHIIT) on heart rate variability (HRV) as a primary outcome measure, and on maximum oxygen consumption (VO2max), body composition, and lower limb muscle strength as secondary outcome measures, in sedentary young women. Methods Thirty-six participants were recruited in this study. The LVHIIT group (n = 17) performed one 4-min bout of treadmill running at 85%-95% maximum heart rate (HRmax), followed by 3 min of recovery by running at 70% HRmax, three times per week for 6 weeks. The HVHIIT group (n = 15) performed four times 4-min bouts of treadmill running at 85%-95% HRmax, interspersed with 3-min of recovery by running at 70% HRmax, 3 times per week for 6 weeks. All criterion measures were measured before and after training in both the groups. Results Due to attrition of four cases, data of 32 participants was used for analysis. A significant increase in high frequency (HF) power (p < 0.001) and decrease in the ratio of low frequency to high frequency power (LF/HF) ratio (p < 0.001) in HRV parameters, was observed post-HVHIIT, whereas, these variables did not change significantly (HF: p = 0.92, LF/HF ratio: p = 0.52) in LVHIIT group. Nevertheless, both the interventions proved equally effective in improving aerobic capacity (VO2max), body composition, and muscle strength. Conclusion The study results suggest that both LVHIIT and HVHIIT are equally effective in improving VO2max, body composition, and muscle strength, in sedentary young women. However, HVHIIT induces parasympathetic dominance as well, as measured by HRV.
Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehe, Remi; Kirchen, Manuel; Jalas, Soeren
The Fourier-Bessel Particle-In-Cell code is a scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is to use a spectral decomposition in cylindrical geometry. This decomposition allows to combine the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) and those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup when compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (the GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs.) The code has the exactmore » same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format as Warp (Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.« less
Installed Transonic 2D Nozzle Nacelle Boattail Drag Study
NASA Technical Reports Server (NTRS)
Malone, Michael B.; Peavey, Charles C.
1999-01-01
The Transonic Nozzle Boattail Drag Study was initiated in 1995 to develop an understanding of how external nozzle transonic aerodynamics effect airplane performance and how strongly those effects are dependent on nozzle configuration (2D vs. axisymmetric). MDC analyzed the axisymmetric nozzle. Boeing subcontracted Northrop-Grumman to analyze the 2D nozzle. AU participants analyzed the AGARD nozzle as a check-out and validation case. Once the codes were checked out and the gridding resolution necessary for modeling the separated flow in this region determined, the analysis moved to the installed wing/body/nacelle/diverter cases. The boat tail drag validation case was the AGARD B.4 rectangular nozzle. This test case offered both test data and previous CFD analyses for comparison. Results were obtained for test cases B.4.1 (M=0.6) and B.4.2 (M=0.938) and compared very well with the experimental data. Once the validation was complete a CFD grid was constructed for the full Ref. H configuration (wing/body/nacelle/diverter) using a combination of patched and overlapped (Chimera) grids. This was done to ensure that the grid topologies and density would be adequate for the full model. The use of overlapped grids allowed the same grids from the full configuration model to be used for the wing/body alone cases, thus eliminating the risk of grid differences affecting the determination of the installation effects. Once the full configuration model was run and deemed to be suitable the nacelle/diverter grids were removed and the wing/body analysis performed. Reference H wing/body results were completed for M=0.9 (a=0.0, 2.0, 4.0, 6.0 and 8.0), M=1.1 (a=4.0 and 6.0) and M=2.4 (a=0.0, 2.0, 4.4, 6.0 and 8.0). Comparisons of the M=0.9 and M=2.4 cases were made with available wind tunnel data and overall comparisons were good. The axi-inlet/2D nozzle nacelle was analyzed isolated. The isolated nacelle data coupled with the wing/body result enabled the interference effects of the installed nacelles to be determined. Isolated nacelle mm were made at M=0.9 and M=1.1 for both the supersonic and transonic nozzle settings. AU of the isolated nacelle cases were run at alpha=0. Full configuration runs were to be made at Mach numbers of 0.9, 1.1, and 2.4 (the same as the wing/body and isolated nacelles). Both the isolated nacelles and installed nacelles were run with inlet conditions designed to give zero spillage. This was to be done in order to isolate the boattail effects as much as possible. Full configuration runs with the supersonic nozzles were completed for M=0.9 and 1.1 at a=4.0 and 6.0 (4 runs total) and with the transonic nozzles at M=0.9 and 1.1 at a=2.0, 4.0 and 6.0 (6 runs total). Drag breakdowns were completed for the M=0.9 and M= 1.1 showing favorable interference drag for both cases.
Aerodynamic Performance Predictions of a SA- 2 Missile Using Missile DATCOM
2009-09-01
transformation that is given by Eqs. (4) and (5). Eqs. (8)–(10) show the formulation in the body and wind axis terminology. 2,0D AC C kC L (8) 10 cos...by Teo (2008) using Missile LAB code. However, the missile geometry then was set up from a rudimentary drawing and not one that represented a high...provided by MSIC. These particular cases were run forcing turbulent flow with a surface roughness of 0.001016 cm, which was found by Teo (2008) to
NASA Astrophysics Data System (ADS)
Samsing, Johan; Askar, Abbas; Giersz, Mirek
2018-03-01
We estimate the population of eccentric gravitational wave (GW) binary black hole (BBH) mergers forming during binary–single interactions in globular clusters (GCs), using ∼800 GC models that were evolved using the MOCCA code for star cluster simulations as part of the MOCCA-Survey Database I project. By re-simulating BH binary–single interactions extracted from this set of GC models using an N-body code that includes GW emission at the 2.5 post-Newtonian level, we find that ∼10% of all the BBHs assembled in our GC models that merge at present time form during chaotic binary–single interactions, and that about half of this sample have an eccentricity >0.1 at 10 Hz. We explicitly show that this derived rate of eccentric mergers is ∼100 times higher than one would find with a purely Newtonian N-body code. Furthermore, we demonstrate that the eccentric fraction can be accurately estimated using a simple analytical formalism when the interacting BHs are of similar mass, a result that serves as the first successful analytical description of eccentric GW mergers forming during three-body interactions in realistic GCs.
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement calledmore » Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.« less
High-Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2010-01-01
It has been known for some time that Taylor series (TS) integration is among the most efficient and accurate numerical methods in solving differential equations. However, the full benefit of the method has yet to be realized in calculating spacecraft trajectories, for two main reasons. First, most applications of Taylor series to trajectory propagation have focused on relatively simple problems of orbital motion or on specific problems and have not provided general applicability. Second, applications that have been more general have required use of a preprocessor, which inevitably imposes constraints on computational efficiency. The latter approach includes the work of Berryman et al., who solved the planetary n-body problem with relativistic effects. Their work specifically noted the computational inefficiencies arising from use of a preprocessor and pointed out the potential benefit of manually coding derivative routines. In this Engineering Note, we report on a systematic effort to directly implement Taylor series integration in an operational trajectory propagation code: the Spacecraft N-Body Analysis Program (SNAP). The present Taylor series implementation is unique in that it applies to spacecraft virtually anywhere in the solar system and can be used interchangeably with another integration method. SNAP is a high-fidelity trajectory propagator that includes force models for central body gravitation with N X N harmonics, other body gravitation with N X N harmonics, solar radiation pressure, atmospheric drag (for Earth orbits), and spacecraft thrusting (including shadowing). The governing equations are solved using an eighth-order Runge-Kutta Fehlberg (RKF) single-step method with variable step size control. In the present effort, TS is implemented by way of highly integrated subroutines that can be used interchangeably with RKF. This makes it possible to turn TS on or off during various phases of a mission. Current TS force models include central body gravitation with the J2 spherical harmonic, other body gravitation, thrust, constant atmospheric drag from Earth's atmosphere, and solar radiation pressure for a sphere under constant illumination. The purpose of this Engineering Note is to demonstrate the performance of TS integration in an operational trajectory analysis code and to compare it with a standard method, eighth-order RKF. Results show that TS is 16.6 times faster on average and is more accurate in 87.5% of the cases presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, S.Y.
2001-02-02
The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of g{Gamma}{sub n} is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdf's) of g{Gamma}{sub n} for up to d-wave (l=2) have been derived from the {chi}{sup 2} distribution of Porter and Thomas. The pdf's take two possible channel spins into account. This code ismore » a tool which evaluators will use to construct resonance parameters and help to assign resonance spin. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.« less
A Fast Code for Jupiter Atmospheric Entry
NASA Technical Reports Server (NTRS)
Tauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq; Arnold, James (Technical Monitor)
1998-01-01
A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry; the calculation required 3.5 sec of CPU time on a work station. The computed surface recessions from ablation were compared with the flight values at six body stations. The average, absolute, predicted difference in the recession was 12.5% too high. The forebody's mass loss was overpredicted by 5.5% and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was considered satisfactory, especially in view of the code's fast running time and the methods' approximations.
Tanda, Giovanni; Knechtle, Beat
2013-01-01
The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(-0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165-266 minutes.
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summaryProgram title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.
Metabolic, Cardiopulmonary, and Gait Profiles of Recently Injured and Noninjured Runners
Peng, Lucinda; Seay, Amanda N.; Montero, Cindy; Barnes, Leslie L.; Vincent, Kevin R.; Conrad, Bryan P.; Chen, Cong; Vincent, Heather K.
2017-01-01
Objective To examine whether runners recovering from a lower body musculoskeletal injury have different metabolic, cardiopulmonary, and gait responses compared with healthy runners. Design Cross-sectional study. Setting Research laboratory at an academic institution. Methods Healthy runners (n = 50) were compared with runners who were recently injured but had returned to running (n = 50). Both groups were participating in similar cross-training modalities such as swimming, weight training, biking, and yoga. Running gait was analyzed on a treadmill using 3-dimensional motion capture, and metabolic and cardiopulmonary measures were captured simultaneously with a portable metabolic analyzer. Main Outcome Measures Rate of oxygen consumption, heart rate, ventilation, carbohydrate and fat oxidation values, gait temporospatial parameters and range of motion measures (ROM) in the sagittal plane, energy expenditure, and vertical displacement of the body’s center of gravity (COG). Results The self-selected running speed was different between the injured and healthy runners (9.7 ± 1.1 km/h and 10.6 ± 1.1 km/h, respectively; P = .038). No significant group differences were noted in any metabolic or cardiopulmonary variable while running at the self-selected or standard speed (13.6 km/h). The vertical displacement of the COG was less in the injured group (8.4 ± 1.4 cm and 8.9 ± 1.4, respectively; P = .044). ROM about the right ankle in the sagittal plane at the self-selected running speed during the gait cycle was less in the injured runners compared with the healthy runners (P < .05). Conclusions Runners with a recent lower body injury who have returned to running have similar cardiopulmonary and metabolic responses to running as healthy runners at the self-selected and standard speeds; this finding may be due in part to participation in cross-training modes that preserve cardiopulmonary and metabolic adaptations. Injured runners may conserve motion by minimizing COG displacement and ankle joint ROM during a gait cycle. PMID:24998402
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
NASA Astrophysics Data System (ADS)
Vandenbroucke, B.; Wood, K.
2018-04-01
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given type, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code, but also as a moving-mesh code.
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
NASA Astrophysics Data System (ADS)
Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.
2016-05-01
Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.
Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.
Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent
2017-04-01
No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O 2 max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O 2 max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.
Missile Aerodynamics (Aerodynamique des Missiles)
1998-11-01
Magnus effect. effects on a spinning finned cylindrical body. Despite the large As noted above, the source, magnitude and even the direction amount of...axis, and to circular- cylindrical bodies in combination with determine directly the pressures acting on the body. triangular, rectangular, or...pressure drop in smooth cylindrical codes, as well as for testing and checking CFD-based tubes", NACA ARR L4C16, 1944. results. 6. Nielsen, J. N. and
Studies of Planet Formation using a Hybrid N-body + Planetesimal Code
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.; Bromley, Benjamin C.; Salamon, Michael (Technical Monitor)
2005-01-01
The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: 1) icy planets - models for icy planet formation will demonstrate how the physical properties of debris disks, including the Kuiper Belt in our solar system, depend on initial conditions and input physics; and 2) terrestrial planets - calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment. During the past year, we made progress on each issue. Papers published in 2004 are summarized. Summaries of work to be completed during the first half of 2005 and work planned for the second half of 2005 are included.
Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations
NASA Astrophysics Data System (ADS)
O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.
2005-09-01
We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in the same quantities for gas of lower overdensity. For example, at z=3 the fraction of cosmic gas that has temperature logT>0.5 is ~80% for both Enzo ZEUS and GADGET, while it is 40%-60% for Enzo PPM. We argue that these discrepancies are due to differences in the shock-capturing abilities of the different methods. In particular, we find that the ZEUS implementation of artificial viscosity in Enzo leads to some unphysical heating at early times in preshock regions. While this is apparently a significantly weaker effect in GADGET, its use of an artificial viscosity technique may also make it prone to some excess generation of entropy that should be absent in Enzo PPM. Overall, the hydrodynamical results for GADGET are bracketed by those for Enzo ZEUS and Enzo PPM but are closer to Enzo ZEUS.
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations inmore » which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log{sup O(l)} w processors and time t log{sup O(l)} w. The failure probability of the computation will be at most t {center_dot} exp(-w{sup 1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log{sup O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.« less
Aftermath of early Hit-and-Run collisions in the Inner Solar System
NASA Astrophysics Data System (ADS)
Sarid, Gal; Stewart, Sarah T.; Leinhardt, zoe M.
2015-08-01
Planet formation epoch, in the terrestrial planet region and the asteroid belt, was characterized by a vigorous dynamical environment that was conducive to giant impacts among planetary embryos and asteroidal parent bodies, leading to diverse outcomes. Among these the greatest potential for producing diverse end-members lies is the erosive Hit-and-Run regime (small mass ratios, off-axis oblique impacts and non-negligible ejected mass), which is also more probable in terms of the early dynamical encounter configuration in the inner solar system. This collision regime has been invoked to explain outstanding issues, such as planetary volatile loss records, origin of the Moon and mantle stripping from Mercury and some of the larger asteroids (Vesta, Psyche).We performed and analyzed a set of simulations of Hit-and-Run events, covering a large range of mass ratios (1-20), impact parameters (0.25-0.96, for near head-on to barely grazing) and impact velocities (~1.5-5 times the mutual escape velocity, as dependent on the mass ratio). We used an SPH code with tabulated EOS and a nominal simlated time >1 day, to track the collisional shock processing and the provenance of material components. of collision debris. Prior to impact runs, all bodies were allowed to initially settle to negligible particle velocities in isolation, within ~20 simulated hrs. The total number of particles involved in each of our collision simulations was between (1-3 x 105). Resulting configurations include stripped mantles, melting/vaporization of rock and/or iron cores and strong variations of asteroid parent bodies fromcanonical chondritic composition.In the context of large planetary formation simulations, velocity and impact angle distributions are necessary to asses impact probabilities. The mass distribution and interaction within planetary embryo and asteroid swarms depends both on gravitational dynamics and the applied fragmentation mechanism. We will present results pertaining to general projectile remnant scaling relations, constitution of ejected unbound material and the composition of variedcollision remnants, which become available to seed the asteroid belt.
Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, M
2006-12-12
ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalViewmore » on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.« less
Feasibility Study of Alternative Fabrication Methods.
1979-08-01
must comply with the I requirements of the latest edition of the National Electrical Code. The Body and Liner Assembly System will comply with I the...latest edition of the National Electrical Code per AMCR 385 (Army Material Command Safety Manual). Also, OSHA’s 1 Occupational Safety and Health...the top of the elevator. On the top and at the rear of * A-4 50 UD( AISA 6 --7 G OVe CASE THOMS ’ON J .7O37IA (PLACES) _ _ -- 3. BAL - 1N-TOS10 12N
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Campbell, John M.; Ellis, R. Keith
We present the implementation of several color-singlet final-state processes at Next-to-Next-to Leading Order (NNLO) accuracy in QCD to the publicly available parton-level Monte Carlo program MCFM. Specifically we discuss the processesmore » $$pp\\rightarrow H$$, $$pp\\rightarrow Z$$, $$pp\\rightarrow W$$, $$pp\\rightarrow HZ$$, $$pp\\rightarrow HW$$ and $$pp\\rightarrow\\gamma\\gamma$$. Decays of the unstable bosons are fully included, resulting in a flexible fully differential Monte Carlo code. The NNLO corrections have been calculated using the non-local $N$-jettiness subtraction approach. Special attention is given to the numerical aspects of running MCFM for these processes at this order. Here, we pay particular attention to the systematic uncertainties due to the power corrections induced by the $N$-jettiness regularization scheme and the evaluation time needed to run the hybrid openMP/MPI version of MCFM at NNLO on multi-processor systems.« less
Color-singlet production at NNLO in MCFM
Boughezal, Radja; Campbell, John M.; Ellis, R. Keith; ...
2016-12-30
We present the implementation of several color-singlet final-state processes at Next-to-Next-to Leading Order (NNLO) accuracy in QCD to the publicly available parton-level Monte Carlo program MCFM. Specifically we discuss the processesmore » $$pp\\rightarrow H$$, $$pp\\rightarrow Z$$, $$pp\\rightarrow W$$, $$pp\\rightarrow HZ$$, $$pp\\rightarrow HW$$ and $$pp\\rightarrow\\gamma\\gamma$$. Decays of the unstable bosons are fully included, resulting in a flexible fully differential Monte Carlo code. The NNLO corrections have been calculated using the non-local $N$-jettiness subtraction approach. Special attention is given to the numerical aspects of running MCFM for these processes at this order. Here, we pay particular attention to the systematic uncertainties due to the power corrections induced by the $N$-jettiness regularization scheme and the evaluation time needed to run the hybrid openMP/MPI version of MCFM at NNLO on multi-processor systems.« less
NASA Astrophysics Data System (ADS)
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.
HRB-22 preirradiation thermal analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, R.; Sawa, K.
1995-05-01
This report describes the preirradiation thermal analysis of the HRB-22 capsule designed for irradiation in the removable beryllium (RB) position of the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). CACA-2 a heavy isotope and fission product concentration calculational code for experimental irradiation capsules was used to determine time dependent fission power for the fuel compacts. The Heat Engineering and Transfer in Nine Geometries (HEATING) computer code, version 7.2, was used to solve the steady-state heat conduction problem. The diameters of the graphite fuel body that contains the compacts and the primary pressure vessel were selected suchmore » that the requirements of running the compacts at an average temperature of < 1,250 C and not exceeding a maximum fuel temperature of 1,350 C was met throughout the four cycles of irradiation.« less
Giorgos, Paradisis; Elias, Zacharogiannis
2007-01-01
The aim of this study was to investigate the effect of 6 wk of whole body vibration (WBV) training on sprint running kinematics and explosive strength performance. Twenty-four volunteers (12 women and 12 men) participated in the study and were randomised (n = 12) into the experimental and control groups. The WBV group performed a 6-wk program (16-30 min·d-1, 3 times a week) on a vibration platform. The amplitude of the vibration platform was 2.5 mm and the acceleration was 2.28 g. The control group did not participate in any training. Tests were performed Pre and post the training period. Sprint running performance was measured during a 60 m sprint where running time, running speed, step length and step rate were calculated. Explosive strength performance was measured during a counter movement jump (CMJ) test, where jump height and total number of jumps performed in a period of 30 s (30CVJT). Performance in 10 m, 20 m, 40 m, 50 m and 60 m improved significantly after 6 wk of WBV training with an overall improvement of 2.7%. The step length and running speed improved by 5.1% and 3.6%, and the step rate decreased by 3.4%. The countermovement jump height increased by 3.3%, and the explosive strength endurance improved overall by 7.8%. The WBV training period of 6 wk produced significant changes in sprint running kinematics and explosive strength performance. Key pointsWBV training.Sprint running kinematics.Explosive strength performance PMID:24149223
Knoepfli-Lenzin, C; Sennhauser, C; Toigo, M; Boutellier, U; Bangsbo, J; Krustrup, P; Junge, A; Dvorak, J
2010-04-01
The present study examined the effect of football (F, n=15) training on the health profile of habitually active 25-45-year-old men with mild hypertension and compared it with running (R, n=15) training and no additional activity (controls, C, n=17). The participants in F and R completed a 1-h training session 2.4 times/week for 12 weeks. Systolic and diastolic blood pressure decreased in all groups but the decrease in diastolic blood pressure in F (-9 +/- 5 (+/- SD) mmHg) was higher than that in C (-4 +/- 6 mmHg). F was as effective as R in decreasing body mass (-1.6 +/- 1.8 vs-1.5 +/- 2.1 kg) and total fat mass (-2.0 +/- 1.5 vs -1.6 +/- 1.5 kg) and in increasing supine heart rate variability, whereas no changes were detected for C. Maximal stroke volume improved in F (+13.1%) as well as in R (+10.1%) compared with C (-4.9%). Total cholesterol decreased in F (5.8 +/- 1.2 to 5.5 +/- 0.9 mmol/L) but was not altered in R and C. We conclude that football training, consisting of high-intensity intermittent exercise, results in positive effects on blood pressure, body composition, stroke volume and supine heart rate variability, and elicits at least the same cardiovascular health benefits as continuous running exercise in habitually active men with mild hypertension.
Are running speeds maximized with simple-spring stance mechanics?
Clark, Kenneth P; Weyand, Peter G
2014-09-15
Are the fastest running speeds achieved using the simple-spring stance mechanics predicted by the classic spring-mass model? We hypothesized that a passive, linear-spring model would not account for the running mechanics that maximize ground force application and speed. We tested this hypothesis by comparing patterns of ground force application across athletic specialization (competitive sprinters vs. athlete nonsprinters, n = 7 each) and running speed (top speeds vs. slower ones). Vertical ground reaction forces at 5.0 and 7.0 m/s, and individual top speeds (n = 797 total footfalls) were acquired while subjects ran on a custom, high-speed force treadmill. The goodness of fit between measured vertical force vs. time waveform patterns and the patterns predicted by the spring-mass model were assessed using the R(2) statistic (where an R(2) of 1.00 = perfect fit). As hypothesized, the force application patterns of the competitive sprinters deviated significantly more from the simple-spring pattern than those of the athlete, nonsprinters across the three test speeds (R(2) <0.85 vs. R(2) ≥ 0.91, respectively), and deviated most at top speed (R(2) = 0.78 ± 0.02). Sprinters attained faster top speeds than nonsprinters (10.4 ± 0.3 vs. 8.7 ± 0.3 m/s) by applying greater vertical forces during the first half (2.65 ± 0.05 vs. 2.21 ± 0.05 body wt), but not the second half (1.71 ± 0.04 vs. 1.73 ± 0.04 body wt) of the stance phase. We conclude that a passive, simple-spring model has limited application to sprint running performance because the swiftest runners use an asymmetrical pattern of force application to maximize ground reaction forces and attain faster speeds. Copyright © 2014 the American Physiological Society.
The Scylla Multi-Code Comparison Project
NASA Astrophysics Data System (ADS)
Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team
2016-01-01
Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running idenitical initial condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example we find that in all runs cold flow disks exist; extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constarin the correct model of feedback.
MAGI: many-component galaxy initializer
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2018-04-01
Providing initial conditions is an essential procedure for numerical simulations of galaxies. The initial conditions for idealized individual galaxies in N-body simulations should resemble observed galaxies and be dynamically stable for time-scales much longer than their characteristic dynamical times. However, generating a galaxy model ab initio as a system in dynamical equilibrium is a difficult task, since a galaxy contains several components, including a bulge, disc, and halo. Moreover, it is desirable that the initial-condition generator be fast and easy to use. We have now developed an initial-condition generator for galactic N-body simulations that satisfies these requirements. The developed generator adopts a distribution-function-based method, and it supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc. We tested the dynamical stability of systems generated by our code, representing early- and late-type galaxies, with N = 2097 152 and 8388 608 particles, respectively, and we found that the model galaxies maintain their initial distributions for at least 1 Gyr. The execution times required to generate the two models were 8.5 and 221.7 seconds, respectively, which is negligible compared to typical execution times for N-body simulations. The code is provided as open-source software and is publicly and freely available at https://bitbucket.org/ymiki/magi.
Relationship Between Body Fat and Physical Fitness in Army ROTC Cadets.
Steed, Carly L; Krull, Benjamin R; Morgan, Amy L; Tucker, Robin M; Ludy, Mary-Jon
2016-09-01
The Army Physical Fitness Test (APFT), including timed push-ups, sit-ups, and run, assesses physical performance for the Army. Percent body fat is estimated using height and circumference measurements. The objectives of the study were to (a) compare the accuracy of height and circumference measurements to other, more accepted, body fat assessment methods and (b) determine the relationships between body composition and APFT results. Participants included Reserve Officer Training Corps (ROTC) cadets (n = 11 males, 2 females, 21.6 ± 3.5 years) from a midwestern university). At one visit, percent body fat was assessed using height and circumference measurements, air-displacement plethysmography, and bioelectrical impedance analysis. APFT results were provided by the ROTC director. All assessment methods for percent body fat were strongly associated (r ≥ 0.7, p < 0.01), implying that height and circumference measurement is a practical tool to estimate percent body fat of ROTC cadets. Total APFT score was not associated with any body fat assessment method. Push-up number was negatively associated with percent body fat by all assessment methods (r ≥ -0.8, p = 0.001), although run time was positively associated (r ≥ 0.6, p < 0.05). This suggests that percent body fat may be an important variable in determining or improving cardiovascular and muscular endurance, but not APFT performance. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, specially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summaryProgram title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very cpu intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo Event Generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and to incorporate additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes has no communication between processes it is possible to achieve an efficiency of a 100% (this number certainly will be penalized by the queuing waiting time). Running time: Times vary according to the process, single or double ionization, to be simulated, the number of processors and the type of theoretical model. 
The typical running time is between several hours and up to a few weeks.
RunJumpCode: An Educational Game for Educating Programming
ERIC Educational Resources Information Center
Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon
2017-01-01
Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas
2016-01-06
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
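The two kernels described above have simple dense-algebra analogues, which the following NumPy sketch makes explicit; the array sizes and values are arbitrary, and this is only a software illustration of what the analog crossbar computes in hardware.

```python
# Illustrative NumPy analogue of the two crossbar kernels: a parallel read is
# a vector-matrix multiplication and a parallel write is a rank-1 update.
import numpy as np

N = 256
G = np.random.rand(N, N)      # conductance matrix stored in the crossbar
v = np.random.rand(N)         # input voltages applied to the rows

# Parallel read: every column current is a dot product, i.e. one
# vector-matrix multiply (O(N^2) multiply-accumulates in a single step).
i_out = v @ G

# Parallel write: outer-product (rank-1) update of all N^2 weights at once.
row_pulse = np.random.rand(N)
col_pulse = np.random.rand(N)
G += np.outer(row_pulse, col_pulse)
```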
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Gongbo; Koyama, Kazuya; Li Baojiu
2011-02-15
We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.
Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas
2017-04-01
We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, fine meshed grid. First, with the same numerical formulation as the AMR-code and second, with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance wise, the AMR run was cheaper than the fixed and isotropic grid run with similar numerical formulations. However, for this specific case, the dedicated code outperformed both aforementioned runs.
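The core decision in such an AMR scheme is where to refine and where to coarsen at each time step. The sketch below shows one generic way to flag cells from a local error estimate; the curvature-based estimator and the thresholds are illustrative assumptions, not the criteria of the code benchmarked in this study.

```python
# Generic sketch of the refine/coarsen decision at the heart of AMR: flag
# cells whose local error estimate exceeds a tolerance for refinement and
# flag well-resolved cells for coarsening.
import numpy as np

def flag_cells(field, refine_tol=1e-2, coarsen_tol=1e-3):
    # Second difference (curvature) as a cheap local error estimate.
    err = np.abs(np.diff(field, 2))
    err = np.pad(err, 1, mode="edge")      # same length as the input field
    return err > refine_tol, err < coarsen_tol

# Example: a sharp interface concentrates refinement where it is needed.
x = np.linspace(0.0, 1.0, 200)
refine, coarsen = flag_cells(np.tanh((x - 0.5) / 0.02))
```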
Visuospatial memory computations during whole-body rotations in roll.
Van Pelt, S; Van Gisbergen, J A M; Medendorp, W P
2005-08-01
We used a memory-saccade task to test whether the location of a target, briefly presented before a whole-body rotation in roll, is stored in egocentric or in allocentric coordinates. To make this distinction, we exploited the fact that subjects, when tilted sideways in darkness, make systematic errors when indicating the direction of gravity (an allocentric task) even though they have a veridical percept of their self-orientation in space. We hypothesized that if spatial memory is coded allocentrically, these distortions affect the coding of remembered targets and their readout after a body rotation. Alternatively, if coding is egocentric, updating for body rotation becomes essential and errors in performance should be related to the amount of intervening rotation. Subjects (n = 6) were tested making saccades to remembered world-fixed targets after passive body tilts. Initial and final tilt angle ranged between -120 degrees CCW and 120 degrees CW. The results showed that subjects made large systematic directional errors in their saccades (up to 90 degrees ). These errors did not occur in the absence of intervening body rotation, ruling out a memory degradation effect. Regression analysis showed that the errors were closely related to the amount of subjective allocentric distortion at both the initial and final tilt angle, rather than to the amount of intervening rotation. We conclude that the brain uses an allocentric reference frame, possibly gravity-based, to code visuospatial memories during whole-body tilts. This supports the notion that the brain can define information in multiple frames of reference, depending on sensory inputs and task demands.
MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science
NASA Astrophysics Data System (ADS)
Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke
2011-12-01
We present a multiple scattering package to calculate the cross-section of various spectroscopies, namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summary Program title: MsSpec-1.0 Catalogue identifier: AEJT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 504 438 No. of bytes in distributed program, including test data, etc.: 14 448 180 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any Operating system: Linux, MacOs RAM: Bytes Classification: 7.2 External routines: Lapack (http://www.netlib.org/lapack/) Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.
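The three cross-section algorithms listed above differ mainly in how they handle the multiple scattering operator. In generic multiple-scattering shorthand (our notation, with T the site scattering matrices and G the free propagator between sites, not MsSpec's internal variables), the exact full-matrix route and its truncated series alternative read:

```latex
\tau \;=\; (I - T\,G)^{-1}\,T
     \;=\; T + T\,G\,T + T\,G\,T\,G\,T + \cdots
     \;\approx\; \sum_{n=0}^{n_{\max}} T\,(G\,T)^{n}
```

Full matrix inversion evaluates the closed form exactly, while the series and correlation expansions truncate the sum at a finite scattering order, trading some accuracy for a much smaller memory and CPU footprint.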
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
A step towards understanding the mechanisms of running-related injuries.
Malisoux, Laurent; Nielsen, Rasmus Oestergaard; Urhausen, Axel; Theisen, Daniel
2015-09-01
To investigate the association between training-related characteristics and running-related injury using a new conceptual model for running-related injury generation, focusing on the synergy between training load and previous injuries, short-term running experience or body mass index (> or < 25 kg·m-2). Prospective cohort study with a 9-month follow-up. The data of two previous studies using the same methodology were revisited. Recreational runners (n = 517) reported information about running training characteristics (weekly distance, frequency, speed), other sport participation and injuries on a dedicated internet platform. Weekly volume (dichotomized into < 2 h and ≥ 2 h) and session frequency (dichotomized into < 2 and ≥ 2) were the main exposures because they were considered necessary causes for running-related injury. Non-training-related characteristics were included in Cox regression analyses as effect-measure modifiers. Hazard ratio was the measure of association. The size of effect-measure modification was calculated as the relative excess risk due to interaction. One hundred sixty-seven runners reported a running-related injury. Crude analyses revealed that weekly volume < 2 h (hazard ratio = 3.29; 95% confidence intervals = 2.27; 4.79) and weekly session frequency < 2 (hazard ratio = 2.41; 95% confidence intervals = 1.71; 3.42) were associated with increased injury rate. Previous injury was identified as an effect-measure modifier on weekly volume (relative excess risk due to interaction = 4.69; 95% confidence intervals = 1.42; 7.95; p = 0.005) and session frequency (relative excess risk due to interaction = 2.44; 95% confidence intervals = 0.48; 4.39; p = 0.015). A negative synergy was found between body mass index and weekly volume (relative excess risk due to interaction = -2.88; 95% confidence intervals = -5.10; -0.66; p = 0.018). The effect of a runner's training load on running-related injury is influenced by body mass index and previous injury. These results show the importance of distinguishing between confounding and effect-measure modification in running-related injury research. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
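For readers unfamiliar with the interaction measure quoted above, the relative excess risk due to interaction is conventionally computed from the hazard ratios of the jointly exposed group and of each exposure alone (generic notation, not taken from the paper):

```latex
\mathrm{RERI} \;=\; \mathrm{HR}_{11} \;-\; \mathrm{HR}_{10} \;-\; \mathrm{HR}_{01} \;+\; 1
```

Here HR11 denotes runners with both factors present (e.g. low weekly volume and a previous injury) and HR10, HR01 each factor alone, all relative to the doubly unexposed group; values above zero indicate positive additive interaction.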
McMullan, Rachel C; Kelly, Scott A; Hua, Kunjie; Buckley, Brian K; Faber, James E; Pardo-Manuel de Villena, Fernando; Pomp, Daniel
2016-11-01
Aging is associated with declining exercise and unhealthy changes in body composition. Exercise ameliorates certain adverse age-related physiological changes and protects against many chronic diseases. Despite these benefits, willingness to exercise and physiological responses to exercise vary widely, and long-term exercise and its benefits are difficult and costly to measure in humans. Furthermore, physiological effects of aging in humans are confounded with changes in lifestyle and environment. We used C57BL/6J mice to examine long-term patterns of exercise during aging and its physiological effects in a well-controlled environment. One-year-old male (n = 30) and female (n = 30) mice were divided into equal size cohorts and aged for an additional year. One cohort was given access to voluntary running wheels while another was denied exercise other than home cage movement. Body mass, composition, and metabolic traits were measured before, throughout, and after 1 year of treatment. Long-term exercise significantly prevented gains in body mass and body fat, while preventing loss of lean mass. We observed sex-dependent differences in body mass and composition trajectories during aging. Wheel running (distance, speed, duration) was greater in females than males and declined with age. We conclude that long-term exercise may serve as a preventive measure against age-related weight gain and body composition changes, and that mouse inbred strains can be used to characterize effects of long-term exercise and factors (e.g. sex, age) modulating these effects. These findings will facilitate studies on relationships between exercise and health in aging populations, including genetic predisposition and genotype-by-environment interactions. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
Run Economy on a Normal and Lower Body Positive Pressure Treadmill.
Temple, Corey; Lind, Erik; VAN Langen, Deborah; True, Larissa; Hupman, Saige; Hokanson, James F
2017-01-01
Lower body positive pressure (LBPP) treadmill running is used more frequently in clinical and athletic settings. Accurate caloric expenditure is required for proper exercise prescription, especially for obese patients performing LBPP exercise. It is unclear if running on LBPP changes running economy (RE) in proportion to the changes in body weight. The purpose of the study was to measure the oxygen consumption (VO2) and running economy (RE) of treadmill running at normal body weight and on LBPP. Twenty-three active, non-obese participants (25.8±7.2 years; BMI = 25.52±3.29 kg·m-2) completed two bouts of running exercise in a counterbalanced manner: (a) on a normal treadmill (NT) and (b) on a LBPP treadmill at 60% (40% of body weight supported) for 4 min at 2.24 (5 mph), 2.68 (6 mph), and 3.13 m·s-1 (7 mph). Repeated measures ANOVA showed a statistically significant interaction in RE among trials, F(2, 44) = 6.510, p < .0005, partial η2 = 0.228. An examination of pairwise comparisons indicated that RE was significantly greater for LBPP across the three speeds (p < 0.005). As expected, LBPP treadmill running resulted in significantly lower oxygen consumption at all three running speeds. We conclude that RE (ml O2·kg-1·km-1) of LBPP running is significantly poorer than normal treadmill running, and the ~30% change in absolute energy cost is not as great as predicted by the change in body weight (40%).
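Since the study reports running economy per unit distance (ml O2·kg-1·km-1), it helps to note how that quantity follows from the measured oxygen uptake; a minimal conversion, assuming VO2 in ml·kg-1·min-1 and treadmill speed in km·h-1 (units the abstract does not state explicitly), is:

```latex
\mathrm{RE}\;[\mathrm{ml\,O_2\,kg^{-1}\,km^{-1}}]
  \;=\; \frac{\dot{V}\mathrm{O}_2\;[\mathrm{ml\,kg^{-1}\,min^{-1}}]\times 60}
             {v\;[\mathrm{km\,h^{-1}}]}
```

At a fixed speed a higher value means more oxygen used per kilometre, i.e. poorer economy.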
Planet formation: is it good or bad to have a stellar companion?
NASA Astrophysics Data System (ADS)
Marzari, F.; Thebault, P.; Scholl, H.
2010-04-01
Planet formation in binary star systems is a complex issue due to the gravitational perturbations of the companion star. One of the crucial steps of the core-accretion model is planetesimal accretion into large protoplanets which finally coalesce into planets. In a planetesimal swarm surrounding the primary star, the average mutual impact velocity determines if larger bodies form or if the population is ground down to dust, halting the planet formation process. This velocity is strongly influenced by the companion gravitational pull and by gas drag. The combined effect of these two forces may act in favour of or against planet formation, making extrasolar planets around binary stars either less probable than, or as probable as, those around single stars. Planetesimal accretion in binaries has been studied so far with two different approaches. N-body codes based on the assumption that the disk is axisymmetric are very cost-effective since they allow the study of the mutual relative velocity with limited CPU usage. A large number of planetesimal trajectories can be computed, making it possible to outline the regions around the star where planet formation is possible. The main limitation of the N-body codes is the axisymmetric assumption. The companion perturbations affect not only the planetesimal orbits, but also the gaseous disk, by forcing spiral density waves. In addition, the overall shape of the disk changes from circular to elliptic. Hybrid codes have been recently developed which solve the equations for the disk with a hydrodynamical grid code and use the computed gas density and velocity vector to calculate an accurate value of the gas drag force on the planetesimals. These codes are more complex and may compute the trajectories of only a limited number of planetesimals.
Changes in foot and shank coupling due to alterations in foot strike pattern during running.
Pohl, Michael B; Buckley, John G
2008-03-01
Determining if and how the kinematic relationship between adjacent body segments changes when an individual's gait pattern is experimentally manipulated can yield insight into the robustness of the kinematic coupling across the associated joint(s). The aim of this study was to assess the effects on the kinematic coupling between the forefoot, rearfoot and shank during ground contact of running with alteration in foot strike pattern. Twelve subjects ran over-ground using three different foot strike patterns (heel strike, forefoot strike, toe running). Kinematic data were collected of the forefoot, rearfoot and shank, which were modelled as rigid segments. Coupling at the ankle-complex and midfoot joints was assessed using cross-correlation and vector coding techniques. In general good coupling was found between rearfoot frontal plane motion and transverse plane shank rotation regardless of foot strike pattern. Forefoot motion was also strongly coupled with rearfoot frontal plane motion. Subtle differences were noted in the amount of rearfoot eversion transferred into shank internal rotation in the first 10-15% of stance during heel strike running compared to forefoot and toe running, and this was accompanied by small alterations in forefoot kinematics. These findings indicate that during ground contact in running there is strong coupling between the rearfoot and shank via the action of the joints in the ankle-complex. In addition, there was good coupling of both sagittal and transverse plane forefoot with rearfoot frontal plane motion via the action of the midfoot joints.
Aerodynamics of wing-assisted incline running in birds.
Tobalske, Bret W; Dial, Kenneth P
2007-05-01
Wing-assisted incline running (WAIR) is a form of locomotion in which a bird flaps its wings to aid its hindlimbs in climbing a slope. WAIR is used for escape in ground birds, and the ontogeny of this behavior in precocial birds has been suggested to represent a model analogous to transitional adaptive states during the evolution of powered avian flight. To begin to reveal the aerodynamics of flap-running, we used digital particle image velocimetry (DPIV) and measured air velocity, vorticity, circulation and added mass in the wake of chukar partridge Alectoris chukar as they engaged in WAIR (incline 65-85 degrees; N=7 birds) and ascending flight (85 degrees, N=2). To estimate lift and impulse, we coupled our DPIV data with three-dimensional wing kinematics from a companion study. The ontogeny of lift production was evaluated using three age classes: baby birds incapable of flight [6-8 days post hatching (d.p.h.)] and volant juveniles (25-28 days) and adults (45+ days). All three age classes of birds, including baby birds with partially emerged, symmetrical wing feathers, generated circulation with their wings and exhibited a wake structure that consisted of discrete vortex rings shed once per downstroke. Impulse of the vortex rings during WAIR was directed 45+/-5 degrees relative to horizontal and 21+/-4 degrees relative to the substrate. Absolute values of circulation in vortex cores and induced velocity increased with increasing age. Normalized circulation was similar among all ages in WAIR but 67% greater in adults during flight compared with flap-running. Estimated lift during WAIR was 6.6% of body weight in babies and between 63 and 86% of body weight in juveniles and adults. During flight, average lift was 110% of body weight. Our results reveal for the first time that lift from the wings, rather than wing inertia or profile drag, is primarily responsible for accelerating the body toward the substrate during WAIR, and that partially developed wings, not yet capable of flight, can produce useful lift during WAIR. We predict that neuromuscular control or power output, rather than external wing morphology, constrain the onset of flight ability during development in birds.
The impact of badminton on health markers in untrained females.
Patterson, Stephen; Pattison, John; Legg, Hayley; Gibson, Ann-Marie; Brown, Nicola
2017-06-01
The purpose of the study was to examine the health effects of 8 weeks of recreational badminton in untrained women. Participants were matched for maximal oxygen uptake (V̇O 2max ) and body fat percentage and assigned to either a badminton (n = 14), running (n = 14) or control group (n = 8). Assessments were conducted pre- and post-intervention with physiological, anthropometric, motivation to exercise and physical self-esteem data collected. Post-intervention, V̇O 2max increased (P < 0.05) by 16% and 14% in the badminton and running groups, respectively, and time to exhaustion increased (P < 0.05) by 19% for both interventions. Maximal power output was increased (P < 0.05) by 13% in the badminton group only. Blood pressure, resting heart rate and heart rate during submaximal running were lower (P < 0.05) in both interventions. Perceptions of physical conditioning increased (P < 0.05) in both interventions. There were increases (P < 0.05) in enjoyment and ill health motives in the running group only, whilst affiliation motives were higher (P < 0.05) for the badminton group only. Findings suggest that badminton should be considered a strategy to improving the health and well-being of untrained females who are currently not meeting physical activity guidelines.
Model Description for the SOCRATES Contamination Code
1988-10-21
(Extraction fragment from the report's list of illustrations and body text: figures include "A Schematic Representation of the Major Elements of the Shuttle Contamination Problem", a diagram of the effect of atmospherically scattered molecules on ambient number density for the 200, 250, and 300 km runs, and "A Plot of the Chi-Square Probability Density Function". A surviving sentence notes that quantities are scaled with respect to the far-field ambient number density, nD, leaving only the cross-section scaling factor to be determined.)
Similarities and differences among half-marathon runners according to their performance level
Morante, Juan Carlos; Gómez-Molina, Josué; García-López, Juan
2018-01-01
This study aimed to identify the similarities and differences among half-marathon runners in relation to their performance level. Forty-eight male runners were classified into 4 groups according to their performance level in a half-marathon (min): Group 1 (n = 11, < 70 min), Group 2 (n = 13, < 80 min), Group 3 (n = 13, < 90 min), Group 4 (n = 11, < 105 min). In two separate sessions, training-related, anthropometric, physiological, foot strike pattern and spatio-temporal variables were recorded. Significant differences (p<0.05) between groups (ES = 0.55–3.16) and correlations with performance were obtained (r = 0.34–0.92) in training-related (experience and running distance per week), anthropometric (mass, body mass index and sum of 6 skinfolds), physiological (VO2max, RCT and running economy), foot strike pattern and spatio-temporal variables (contact time, step rate and length). At standardized submaximal speeds (11, 13 and 15 km·h-1), no significant differences between groups were observed in step rate and length, neither in contact time when foot strike pattern was taken into account. In conclusion, apart from training-related, anthropometric and physiological variables, foot strike pattern and step length were the only biomechanical variables sensitive to half-marathon performance, which are essential to achieve high running speeds. However, when foot strike pattern and running speeds were controlled (submaximal test), the spatio-temporal variables were similar. This indicates that foot strike pattern and running speed are responsible for spatio-temporal differences among runners of different performance level. PMID:29364940
Carter, Anne J; Hall, Emily J
2018-02-01
Increasing numbers of people are running with their dogs, particularly in harness through the sport canicross. Whilst canicross races are typically held in the winter months, some human centred events are encouraging running with dogs in summer months, potentially putting dogs at risk of heat related injuries, including heatstroke. The aim of this project was to investigate the effects of ambient conditions and running speed on post-race temperature of canicross dogs in the UK, and investigate the potential risk of heatstroke to canicross racing dogs. The effects of canine characteristics (e.g. gender, coat colour) were explored in order to identify factors that could increase the risk of exercise-induced hyperthermia (defined as body temperature exceeding the upper normal limit of 38.8°C). 108 dogs were recruited from 10 race days, where ambient conditions ranged from -5 to 11°C measured as universal thermal climate index (UTCI). 281 post-race tympanic membrane temperatures were recorded, ranging from 37.0-42.5°C. There was a weak correlation between speed and post-race temperature (r = 0.269, P < 0.001). Whilst no correlation between any single environmental factor or UTCI and post-race temperature was found, the proportion of dogs developing exercise-induced hyperthermia during the race increased with UTCI (r = 0.688, P = 0.028). Male dogs (χ²(1) = 18.286, P < 0.001) and dark coated dogs (χ²(2) = 8.234, P = 0.014) were significantly more likely to finish the race with a temperature exceeding 40.6°C. Prolonged elevation of body temperature above this level is likely to cause heatstroke. At every race dogs exceeded this critical temperature, with 10.7% (n = 30) of the overall study population exceeding it during the study period. The results suggest that male dogs, dark coloured dogs, and increased running speed all increase the risk of heatstroke in racing canicross dogs. Further research is required to investigate the impact of environmental conditions on post-race cooling, to better understand safe running conditions for dogs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tanda, Giovanni; Knechtle, Beat
2013-01-01
Background The purpose of this study was to investigate the effect of anthropometric characteristics and training indices on marathon race times in recreational male marathoners. Methods Training and anthropometric characteristics were collected for a large cohort of recreational male runners (n = 126) participating in the Basel marathon in Switzerland between 2010 and 2011. Results Among the parameters investigated, marathon performance time was found to be affected by mean running speed and the mean weekly distance run during the training period prior to the race and by body fat percentage. The effect of body fat percentage became significant as it exceeded a certain limiting value; for a relatively low body fat percentage, marathon performance time correlated only with training indices. Conclusion Marathon race time may be predicted (r = 0.81) for recreational male runners by the following equation: marathon race time (minutes) = 11.03 + 98.46 exp(−0.0053 mean weekly training distance [km/week]) + 0.387 mean training pace (sec/km) + 0.1 exp(0.23 body fat percentage [%]). The marathon race time results were valid over a range of 165–266 minutes. PMID:24379719
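The regression quoted above can be applied directly; the short Python sketch below simply transcribes the published equation (variable names are ours), valid over the stated range of roughly 165-266 minutes.

```python
# Transcription of the marathon-time regression reported above for
# recreational male runners; inputs outside the fitted range are not covered.
import math

def marathon_time_min(weekly_km, training_pace_s_per_km, body_fat_pct):
    return (11.03
            + 98.46 * math.exp(-0.0053 * weekly_km)
            + 0.387 * training_pace_s_per_km
            + 0.1 * math.exp(0.23 * body_fat_pct))

# Example: 60 km/week at 5:30 min/km (330 s/km) with 15% body fat
# gives roughly 213 minutes.
print(round(marathon_time_min(60, 330, 15)))
```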
Exercise Effects on Tumorigenesis in a p53-deficient Mouse Model of Breast Cancer
Colbert, Lisa H.; Westerlind, Kim C.; Perkins, Susan N.; Haines, Diana C.; Berrigan, David; Donehower, Lawrence A.; Fuchs-Young, Robin; Hursting, Stephen D.
2011-01-01
Purpose Physically active women have a reduced risk of breast cancer, but the dose of activity necessary and the role of energy balance and other potential mechanisms have not been fully explored in animal models. We examined treadmill and wheel running effects on mammary tumorigenesis and biomarkers in p53-deficient (p53+/−): MMTV-Wnt-1 transgenic mice. Methods Female mice (9 wks old) were randomly assigned to the following groups in Experiment 1: treadmill exercise 5 d/wk, 45 min/d, 5% grade at 20 m/min, ~0.90 km/d (TREX1, n=20); at 24 m/min, ~1.08 km/d (TREX2, n=21); or a non-exercise control (CON-TREX, n=22). In Experiment 2, mice were randomly assigned to voluntary wheel-running (WHL, n=21, 2.46 ± 1.11 km/d (mean ± SD)) or a non-exercise control (CON-WHL, n=22). Body composition was measured at ~9 weeks and serum insulin-like growth factor-1 (IGF-1) at 2-3 monthly time points beginning at ~9 weeks on study. Mice were sacrificed when tumors reached 1.5 cm, mice became moribund, or there was only one mouse per treatment group remaining. Results TREX1 (24 wks) and TREX2 (21 wks) had shorter median survival times than CON-TREX (34 wks; p<0.01); WHL and CON-WHL survival was similar (23 vs. 24 wks; p=0.32). TREX2 had increased multiplicity of mammary gland carcinomas compared to CON-TREX; WHL had a higher tumor incidence than CON-WHL. All exercising animals were lighter than their respective controls, and WHL had lower body fat than CON-WHL (p<0.01). There was no difference in IGF-1 between groups (p>0.05). Conclusion Despite beneficial or no effects on body weight, body fat, or IGF-1, exercise had detrimental effects on tumorigenesis in this p53-deficient mouse model of spontaneous mammary cancer. PMID:19568200
Direct dynamics simulation of the impact phase in heel-toe running.
Gerritsen, K G; van den Bogert, A J; Nigg, B M
1995-06-01
The influence of muscle activation, position and velocities of body segments at touchdown and surface properties on impact forces during heel-toe running was investigated using a direct dynamics simulation technique. The runner was represented by a two-dimensional four- (rigid body) segment musculo-skeletal model. Incorporated into the muscle model were activation dynamics, force-length and force-velocity characteristics of seven major muscle groups of the lower extremities: mm. glutei, hamstrings, m. rectus femoris, mm. vasti, m. gastrocnemius, m. soleus and m. tibialis anterior. The vertical force-deformation characteristics of heel, shoe and ground were modeled by a non-linear visco-elastic element. The maximum of a typical simulated impact force was 1.6 times body weight. The influence of muscle activation was examined by generating muscle stimulation combinations which produce the same (experimentally determined) resultant joint moments at heelstrike. Simulated impact peak forces with these different combinations of muscle stimulation levels varied less than 10%. Without this restriction on initial joint moments, muscle activation had potentially a much larger effect on impact force. Impact peak force was to a great extent influenced by plantar flexion (85 N per degree of change in foot angle) and vertical velocity of the heel (212 N per 0.1 m s-1 change in velocity) at touchdown. Initial knee flexion (68 N per degree of change in leg angle) also played a role in the absorption of impact. Increased surface stiffness resulted in higher impact peak forces (60 N mm-1 decrease in deformation).(ABSTRACT TRUNCATED AT 250 WORDS)
Leadership Class Configuration Interaction Code - Status and Opportunities
NASA Astrophysics Data System (ADS)
Vary, James
2011-10-01
With support from SciDAC-UNEDF (www.unedf.org) nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers, including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run on a range of computers, from laptops to leadership-class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archived results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).
Physical self-perception and motor performance in normal-weight, overweight and obese children.
Morano, M; Colella, D; Robazza, C; Bortoli, L; Capranica, L
2011-06-01
The aim of this study was to examine the relationships among physical self-perception, body image and motor performance in Italian middle school students. Two hundred and sixty children were categorized into normal-weight (n=103), overweight (n=86) or obese (n=71) groups. Perceived coordination, body fat and sports competence were assessed using the Physical Self-Description Questionnaire, while body image was measured using Collins' Child Figure Drawings. Individuals' perceptions of strength, speed and agility were assessed using the Perceived Physical Ability Scale. Tests involving the standing long jump, 2 kg medicine-ball throw, 10 × 5 m shuttle-run and 20 and 30 m sprints were also administered. Girls, when compared with boys, and overweight and obese participants, when compared with normal-weight peers, reported lower perceived and actual physical competence, higher perceived body fat and greater body dissatisfaction. Body dissatisfaction mediated all the associations between body mass index (BMI) and the different aspects of physical self-perception in boys, but not in girls. The same pattern of results was found for physical self-perception as a mediator of the relationship between BMI and body dissatisfaction. In conclusion, obesity proved to have adverse effects on both motor performance and physical self-perception. © 2010 John Wiley & Sons A/S.
Walking, running and the evolution of short toes in humans.
Rolian, Campbell; Lieberman, Daniel E; Hamill, Joseph; Scott, John W; Werbel, William
2009-03-01
The phalangeal portion of the forefoot is extremely short relative to body mass in humans. This derived pedal proportion is thought to have evolved in the context of committed bipedalism, but the benefits of shorter toes for walking and/or running have not been tested previously. Here, we propose a biomechanical model of toe function in bipedal locomotion that suggests that shorter pedal phalanges improve locomotor performance by decreasing digital flexor force production and mechanical work, which might ultimately reduce the metabolic cost of flexor force production during bipedal locomotion. We tested this model using kinematic, force and plantar pressure data collected from a human sample representing normal variation in toe length (N=25). The effect of toe length on peak digital flexor forces, impulses and work outputs was evaluated during barefoot walking and running using partial correlations and multiple regression analysis, controlling for the effects of body mass, whole-foot and phalangeal contact times and toe-out angle. Our results suggest that there is no significant increase in digital flexor output associated with longer toes in walking. In running, however, multiple regression analyses based on the sample suggest that increasing average relative toe length by as little as 20% doubles peak digital flexor impulses and mechanical work, probably also increasing the metabolic cost of generating these forces. The increased mechanical cost associated with long toes in running suggests that modern human forefoot proportions might have been selected for in the context of the evolution of endurance running.
Belke, Terry W; Pierce, W David
2009-02-01
Twelve female Long-Evans rats were exposed to concurrent variable (VR) ratio schedules of sucrose and wheel-running reinforcement (Sucrose VR 10 Wheel VR 10; Sucrose VR 5 Wheel VR 20; Sucrose VR 20 Wheel VR 5) with predetermined budgets (number of responses). The allocation of lever pressing to the sucrose and wheel-running alternatives was assessed at high and low body weights. Results showed that wheel-running rate and lever-pressing rates for sucrose and wheel running increased, but the choice of wheel running decreased at the low body weight. A regression analysis of relative consumption as a function of relative price showed that consumption shifted toward sucrose and interacted with price differences in a manner consistent with increased substitutability. Demand curves showed that demand for sucrose became less elastic while demand for wheel running became more elastic at the low body weight. These findings reflect an increase in the difference in relative value of sucrose and wheel running as body weight decreased. Discussion focuses on the limitations of response rates as measures of reinforcement value. In addition, we address the commonalities between matching and demand curve equations for the analysis of changes in relative reinforcement value.
Effects of independently altering body weight and body mass on the metabolic cost of running.
Teunissen, Lennart P J; Grabowski, Alena; Kram, Rodger
2007-12-01
The metabolic cost of running is substantial, despite the savings from elastic energy storage and return. Previous studies suggest that generating vertical force to support body weight and horizontal forces to brake and propel body mass are the major determinants of the metabolic cost of running. In the present study, we investigated how independently altering body weight and body mass affects the metabolic cost of running. Based on previous studies, we hypothesized that reducing body weight would decrease metabolic rate proportionally, and adding mass and weight would increase metabolic rate proportionally. Further, because previous studies show that adding mass alone does not affect the forces generated on the ground, we hypothesized that adding mass alone would have no substantial effect on metabolic rate. We manipulated the body weight and body mass of 10 recreational human runners and measured their metabolic rates while they ran at 3 m s(-1). We reduced weight using a harness system, increased mass and weight using lead worn about the waist, and increased mass alone using a combination of weight support and added load. We found that net metabolic rate decreased in less than direct proportion to reduced body weight, increased in slightly more than direct proportion to added load (added mass and weight), and was not substantially different from normal running with added mass alone. Adding mass alone was not an effective method for determining the metabolic cost attributable to braking/propelling body mass. Runners loaded with mass alone did not generate greater vertical or horizontal impulses and their metabolic costs did not substantially differ from those of normal running. Our results show that generating force to support body weight is the primary determinant of the metabolic cost of running. Extrapolating our reduced weight data to zero weight suggests that supporting body weight comprises at most 74% of the net cost of running. However, 74% is probably an overestimate of the metabolic demand of body weight to support itself because in reduced gravity conditions decrements in horizontal impulse accompanied decrements in vertical impulse.
Belke, Terry W; Pierce, W David; Jensen, K
2004-07-30
A biobehavioural analysis of activity anorexia suggests that the motivation for physical activity is regulated by food supply and body weight. In the present experiment, food allocation was varied within subjects by prefeeding food-deprived rats 0, 5, 10 and 15 g of food before sessions of lever pressing for wheel-running reinforcement. The experiment assessed the effects of prefeeding on rates of wheel running, lever pressing, and postreinforcement pausing. Results showed that prefeeding animals 5 g of food had no effect. Prefeeding 10 g of food reduced lever pressing for wheel running and rates of wheel running without a significant change in body weight; the effect was, however, transitory. Prefeeding 15 g of food increased the animals' body weights, resulting in a sustained decrease of wheel running and lever pressing, and an increase in postreinforcement pausing. Overall the results indicate that the motivation for physical activity is regulated by changes in local food supply, but is sustained only when there is a concomitant change in body weight.
TIM, a ray-tracing program for METATOY research and its dissemination
NASA Astrophysics Data System (ADS)
Lambert, Dean; Hamilton, Alasdair C.; Constable, George; Snehanshu, Harsh; Talati, Sharvil; Courtial, Johannes
2012-03-01
TIM (The Interactive METATOY) is a ray-tracing program specifically tailored towards our research in METATOYs, which are optical components that appear to be able to create wave-optically forbidden light-ray fields. For this reason, TIM possesses features not found in other ray-tracing programs. TIM can either be used interactively or by modifying the openly available source code; in both cases, it can easily be run as an applet embedded in a web page. Here we describe the basic structure of TIM's source code and how to extend it, and we give examples of how we have used TIM in our own research. Program summary Program title: TIM Catalogue identifier: AEKY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 124 478 No. of bytes in distributed program, including test data, etc.: 4 120 052 Distribution format: tar.gz Programming language: Java Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6 Operating system: Any; developed under Mac OS X Version 10.6 RAM: Typically 145 MB (interactive version running under Mac OS X Version 10.6) Classification: 14, 18 External routines: JAMA [1] (source code included) Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories; can visualise geometric optic transformations; can create anaglyphs (for viewing with coloured "3D glasses") and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene.
Pope, Daniel J.
2011-01-01
In the aftermath of the London ‘7/7’ attacks in 2005, UK government agencies required the development of a quick-running tool to predict the weapon and injury effects caused by the initiation of a person borne improvised explosive device (PBIED) within crowded metropolitan environments. This prediction tool, termed the HIP (human injury predictor) code, was intended to: — assist the security services to encourage favourable crowd distributions and densities within scenarios of ‘sensitivity’;— provide guidance to security engineers concerning the most effective location for protection systems;— inform rescue services as to where, in the case of such an event, individuals with particular injuries will be located;— assist in training medical personnel concerning the scope and types of injuries that would be sustained as a consequence of a particular attack;— assist response planners in determining the types of medical specialists (burns, traumatic amputations, lungs, etc.) required and thus identify the appropriate hospitals to receive the various casualty types.This document describes the algorithms used in the development of this tool, together with the pertinent underpinning physical processes. From its rudimentary beginnings as a simple spreadsheet, the HIP code now has a graphical user interface (GUI) that allows three-dimensional visualization of results and intuitive scenario set-up. The code is underpinned by algorithms that predict the pressure and momentum outputs produced by PBIEDs within open and confined environments, as well as the trajectories of shrapnel deliberately placed within the device to increase injurious effects. Further logic has been implemented to transpose these weapon effects into forms of human injury depending on where individuals are located relative to the PBIED. Each crowd member is subdivided into representative body parts, each of which is assigned an abbreviated injury score after a particular calculation cycle. The injury levels of each affected body part are then summated and a triage state assigned for each individual crowd member based on the criteria specified within the ‘injury scoring system’. To attain a comprehensive picture of a particular event, it is important that a number of simulations, using what is substantively the same scenario, are undertaken with natural variation being applied to the crowd distributions and the PBIED output. Accurate mathematical representation of such complex phenomena is challenging, particularly as the code must be quick-running to be of use to the stakeholder community. In addition to discussing the background and motivation for the algorithm and GUI development, this document also discusses the steps taken to validate the tool and the plans for further functionality implementation. PMID:21149351
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
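As a concrete illustration of the self-scheduling idea, the sketch below farms out many independent serial solves, one per angle of attack, to a pool of workers that grab new cases as they finish. It is a Python stand-in for the approach described above, not the Fortran/MPI template included in the paper's appendix; all names and sizes are illustrative.

```python
# Self-scheduling sketch: each task solves one dense linear system, standing
# in for one serial panel-method run at a given angle of attack.
import numpy as np
from multiprocessing import Pool

def solve_case(alpha_deg):
    rng = np.random.default_rng(abs(int(alpha_deg * 100)))
    A = rng.random((500, 500)) + 500 * np.eye(500)   # well-conditioned stand-in matrix
    b = rng.random(500)
    x = np.linalg.solve(A, b)                        # one independent serial job
    return alpha_deg, float(x.sum())

if __name__ == "__main__":
    angles = np.linspace(-10.0, 10.0, 41)
    with Pool() as pool:
        # imap_unordered hands out work as workers become free: self-scheduling.
        for alpha, result in pool.imap_unordered(solve_case, angles):
            print(f"alpha = {alpha:+.1f} deg -> {result:.4e}")
```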
SSME Turbopump Turbine Computations
NASA Technical Reports Server (NTRS)
Jorgenson, P. G. E.
1985-01-01
A two-dimensional viscous code was developed to be used in the prediction of the flow in the SSME high-pressure turbopump blade passages. The rotor viscous code (RVC) employs a four-step Runge-Kutta scheme to solve the two-dimensional, thin-layer Navier-Stokes equations. The Baldwin-Lomax eddy-viscosity model is used for these turbulent flow calculations. A viable method was developed to use the relative exit conditions from an upstream blade row as the inlet conditions to the next blade row. The blade loading diagrams are compared with the meridional values obtained from an in-house quasi-three-dimensional inviscid code. Periodic boundary conditions are imposed on a body-fitted C-grid computed using the GRids about Airfoils using Poisson's Equation (GRAPE) code. Total pressure, total temperature, and flow angle are specified at the inlet. The upstream-running Riemann invariant is extrapolated from the interior. Static pressure is specified at the exit such that mass flow is conserved from blade row to blade row, and the conservative variables are extrapolated from the interior. For viscous flows the no-slip condition is imposed at the wall. The normal momentum equation gives the pressure at the wall. The density at the wall is obtained from the wall total temperature.
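For reference, a classical four-stage Runge-Kutta update of a semi-discrete residual has the form sketched below; this is only a generic illustration of the scheme family named above, and the actual coefficients and residual operator in the rotor viscous code may differ.

```python
# Generic four-stage Runge-Kutta step for du/dt = R(u), shown only to
# illustrate the class of time-marching scheme mentioned above.
import numpy as np

def rk4_step(u, residual, dt):
    k1 = residual(u)
    k2 = residual(u + 0.5 * dt * k1)
    k3 = residual(u + 0.5 * dt * k2)
    k4 = residual(u + dt * k3)
    return u + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Example residual: linear advection on a periodic 1D grid (central differences).
def advection_residual(u, c=1.0, dx=0.01):
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

u = np.exp(-((np.linspace(0.0, 1.0, 100) - 0.5) ** 2) / 0.01)
u = rk4_step(u, advection_residual, dt=0.001)
```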
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
A Fast Code for Jupiter Atmospheric Entry Analysis
NASA Technical Reports Server (NTRS)
Yauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq
1999-01-01
A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry. The calculation required 3.5 sec of CPU time on a workstation, or three to four orders of magnitude less than for previous Jovian entry heat shield analyses. The computed surface recessions from ablation were compared with the flight values at six body stations; on average, the predicted recession was 13.7% higher than the measured values. The forebody's mass loss was overpredicted by 5.3% and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was satisfactory in view of the code's fast running time and the methods' approximations.
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large, complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.
GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2017-04-01
The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and hierarchical time steps. The code applies adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X cards, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC on the GTX TITAN X is 0.30 s when the particle distribution represents the Andromeda galaxy and 0.44 s for an NFW sphere, in both cases with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
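As a rough illustration of the hierarchical (block) time step idea that GOTHIC exploits, the sketch below assigns each particle a power-of-two fraction of a global step and updates only the particles whose level is due on a given substep. The level-assignment rule, the simplified kick/drift ordering, and all names are illustrative assumptions, not GOTHIC's actual implementation.

    import numpy as np

    def block_levels(dt_required, dt_max, max_level=8):
        """Map each particle's desired time step onto a power-of-two level:
        level k means the particle is kicked with step dt_max / 2**k."""
        levels = np.ceil(np.log2(dt_max / dt_required)).astype(int)
        return np.clip(levels, 0, max_level)

    def advance_block_steps(pos, vel, acc_func, dt_required, dt_max):
        """Advance one global step dt_max using hierarchical block time steps.
        Particles on finer levels are kicked more often (simplified sketch)."""
        levels = block_levels(dt_required, dt_max)
        n_sub = 2 ** levels.max()           # number of finest substeps per global step
        dt_min = dt_max / n_sub
        for substep in range(n_sub):
            # a particle at level k is active once every n_sub / 2**k finest substeps
            active = (substep % (n_sub // 2 ** levels)) == 0
            dt_i = dt_max / 2 ** levels[active]
            acc = acc_func(pos)             # in a tree code this would be the tree walk
            vel[active] += acc[active] * dt_i[:, None]
            pos += vel * dt_min             # drift everyone on the finest grid
        return pos, vel

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.normal(size=(1000, 3))
        vel = np.zeros_like(pos)
        dt_req = 10 ** rng.uniform(-3, -1, size=1000)   # per-particle desired steps
        acc = lambda x: -x                              # toy harmonic force
        pos, vel = advance_block_steps(pos, vel, acc, dt_req, dt_max=0.1)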
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Simulations of Instabilities in Tidal Tails
NASA Astrophysics Data System (ADS)
Comparetta, Justin N.; Quillen, A. C.
2010-05-01
We use graphics cards to run a hybrid test-particle/N-body simulation that integrates 4 million massless particle trajectories within fully self-consistent N-body simulations of 128,000 - 256,000 particles. The number of massless particles allows us to resolve fine structure in the spatial distribution and phase space of a dwarf galaxy that is disrupted in the tidal field of a Milky Way type galaxy. The tidal tails exhibit clumping or a smoke-like appearance. By running simulations with different satellite particle masses, different ratios of massive to massless particles, and with and without a galactic disk, we have determined that the instabilities are not due to numerical noise or to shocking as the satellite passes through the disk of the Galaxy. The instability is possibly a result of self-gravity, which suggests it may be due to Jeans instabilities. Simulations involving different halo particle masses may suggest limitations on dark matter halo substructure. We find that the instabilities are visible in velocity space as well as real space and thus could be identified from velocity surveys as well as number counts.
NASA Astrophysics Data System (ADS)
Noriega-Mendoza, H.; Aguilar, L. A.
2018-04-01
We performed high-precision N-body simulations of the cold collapse of initially spherical, collisionless systems using the GYRFALCON code of Dehnen (2000). The collapses produce very prolate spheroidal configurations. After the collapse, the systems are simulated for 85 and 170 half-mass radius dynamical timescales, during which energy conservation is better than 0.005%. We use this period to extract individual particle orbits directly from the simulations. We then use the TAXON code of Carpintero and Aguilar (1998) to classify 1 to 1.5% of the extracted orbits from our final, relaxed configurations: less than 15% are chaotic orbits, 30% are box orbits, and 60% are tube orbits (long and short axis). Our goal has been to prove that direct orbit extraction is feasible, and that there is no need to "freeze" the final N-body system configuration to extract a time-independent potential.
N-body simulations for f(R) gravity using a self-adaptive particle-mesh code
NASA Astrophysics Data System (ADS)
Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya
2011-02-01
We perform high-resolution N-body simulations for f(R) gravity based on the self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on fully nonlinear scales.
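For orientation, the nonlinear scalar-field problem that such chameleon f(R) simulations solve on the mesh can be written schematically in the quasi-static limit, with f_R = df/dR the extra scalar degree of freedom. The exact factors and conventions follow the papers cited above; the form below is reproduced only as an assumed sketch:

    \nabla^2 f_R = \frac{a^2}{3}\left[\delta R(f_R) - 8\pi G\,\delta\rho_m\right],
    \qquad
    \nabla^2 \Phi = \frac{16\pi G}{3}\,a^2\,\delta\rho_m + \frac{a^2}{6}\,\delta R(f_R)

The first equation is the nonlinear field equation whose solution activates the chameleon screening; the second is the correspondingly modified Poisson equation used to move the particles.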
The linearly scaling 3D fragment method for large scale electronic structure calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Zhengji; Meza, Juan; Lee, Byounghak
2009-07-28
The Linearly Scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
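The divide-and-conquer patching idea behind LS3DF can be sketched as follows: the total charge density (or energy) is assembled from overlapping fragments of different sizes anchored at each grid corner, with alternating signs chosen so that the artificial fragment boundaries cancel by inclusion-exclusion. The actual bookkeeping is more involved; the expression below is only a schematic reconstruction of that alternating-sign sum:

    \rho_{\rm tot} \simeq \sum_{\mathbf{k}}
    \Big( \rho^{\mathbf{k}}_{222}
        - \rho^{\mathbf{k}}_{122} - \rho^{\mathbf{k}}_{212} - \rho^{\mathbf{k}}_{221}
        + \rho^{\mathbf{k}}_{112} + \rho^{\mathbf{k}}_{121} + \rho^{\mathbf{k}}_{211}
        - \rho^{\mathbf{k}}_{111} \Big)

where the subscripts denote the fragment extent (in grid cells) along x, y, and z for the fragment anchored at corner k.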
Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.
2013-01-01
This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
Effects of Strength Training on Postpubertal Adolescent Distance Runners.
Blagrove, Richard C; Howe, Louis P; Cushion, Emily J; Spence, Adam; Howatson, Glyn; Pedlar, Charles R; Hayes, Philip R
2018-06-01
Strength training activities have consistently been shown to improve running economy (RE) and neuromuscular characteristics, such as force-producing ability and maximal speed, in adult distance runners. However, the effects on adolescent (<18 yr) runners remain elusive. This randomized controlled trial aimed to examine the effect of strength training on several important physiological and neuromuscular qualities associated with distance running performance. Participants (n = 25, 13 female, 17.2 ± 1.2 yr) were paired according to their sex and RE and randomly assigned to a 10-wk strength training group (STG) or a control group who continued their regular training. The STG performed twice-weekly sessions of plyometric, sprint, and resistance training in addition to their normal running. Outcome measures included body mass, maximal oxygen uptake (V̇O2max), speed at V̇O2max, RE (quantified as energy cost), speed at fixed blood lactate concentrations, 20-m sprint, and maximal voluntary contraction during an isometric quarter-squat. Eighteen participants (STG: n = 9, 16.1 ± 1.1 yr; control group: n = 9, 17.6 ± 1.2 yr) completed the study. The STG displayed small improvements (3.2%-3.7%; effect size (ES), 0.31-0.51) in RE that were inferred as "possibly beneficial" for an average of three submaximal speeds. Trivial or small changes were observed for body composition variables, V̇O2max, and speed at V̇O2max; however, the training period provided likely benefits to speed at fixed blood lactate concentrations in both groups. Strength training elicited a very likely benefit and a possible benefit to sprint time (ES, 0.32) and maximal voluntary contraction (ES, 0.86), respectively. Ten weeks of strength training added to the program of a postpubertal distance runner was highly likely to improve maximal speed and to enhance RE by a small extent, without deleterious effects on body composition or other aerobic parameters.
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
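As a toy illustration of the run-time code generation approach described above, the snippet below builds a compute-kernel source string from a template at run time, in the spirit of Sailfish's Python-driven GPU code generation. The template contents and names here are invented for illustration and are not Sailfish's actual templates or API; in a real run the generated string would be handed to PyCUDA/PyOpenCL for compilation.

    from string import Template

    # A (made-up) kernel template: model-specific pieces are substituted at run time.
    KERNEL_TEMPLATE = Template("""
    __kernel void relax(__global float* f, __global const float* feq, int n) {
        int i = get_global_id(0);
        if (i < n) {
            // BGK-style relaxation toward equilibrium with omega = $omega
            f[i] = f[i] - $omega * (f[i] - feq[i]);
        }
    }
    """)

    def generate_kernel(omega: float) -> str:
        """Return OpenCL C source specialized for a given relaxation rate."""
        return KERNEL_TEMPLATE.substitute(omega=repr(float(omega)))

    if __name__ == "__main__":
        print(generate_kernel(omega=1.7))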
Initial conditions for accurate N-body simulations of massive neutrino cosmologies
NASA Astrophysics Data System (ADS)
Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.
2017-04-01
The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or below for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, rescaled power spectra for initial conditions with massive neutrinos, https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate (i.e. 1 per cent level) numerical simulations for this cosmological scenario.
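In essence, rescaling the late-time linear power spectrum to the starting redshift uses the scale-dependent growth of the two-fluid system. Schematically, and with notation assumed here rather than taken from the paper,

    P_{\rm cb}(k, z_{\rm ini}) = P_{\rm cb}(k, z_{\rm fin})
    \left[\frac{D(k, z_{\rm ini})}{D(k, z_{\rm fin})}\right]^2 ,
    \qquad
    f(k, z) \equiv \frac{d\ln D(k, z)}{d\ln a}

where D(k, z) is the scale-dependent growth factor of the cold (CDM+baryon) component and the growth rate f(k, z) sets the initial particle velocities.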
Pierce, Joseph R; DeGroot, David W; Grier, Tyson L; Hauret, Keith G; Nindl, Bradley C; East, Whitfield B; McGurk, Michael S; Jones, Bruce H
2017-11-01
Army body composition standards are based upon validated criteria; however, certain field-expedient methodologies (e.g., weight-for-height, body mass index [BMI]) may disqualify individuals from service who may otherwise excel on physical performance and military-relevant tasks. The purpose was to assess soldier physical performance and military-specific task/fitness performance stratified by BMI. Cross-sectional observational study. Male (n=275) and female (n=46) soldiers performed a wide array of physical fitness tests and military-specific tasks, including the Army physical fitness test (APFT). Within-sex performance data were analyzed by BMI tertile stratification or by Army Body Composition Program (ABCP) weight-for-height (calculated BMI) screening standards using ANOVA/Tukey post-hoc or independent t-tests, respectively. BMI stratification (higher vs. lower BMI) was associated with significant improvements in muscular strength and power, but also with decrements in speed/agility in male and female soldiers. Within the military-specific tasks, a higher BMI was associated with an increased APFT 2-Mile Run time; however, performance on a 1600-m Loaded March or a Warrior Task and Battle Drill obstacle course was not related to BMI in either sex. Male and female soldiers who did not meet ABCP screening standards demonstrated a slower 2-Mile Run time; however, not meeting the ABCP BMI standard only affected a minimal number (∼6%) of soldiers' ability to pass the APFT. Military body composition standards require a careful balance between physical performance, health, and military readiness. Allowances should be considered where tradeoffs exist between body composition classifications and performance on physical tasks with high military relevance. Published by Elsevier Ltd.
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup. © 2015 Wiley Periodicals, Inc.
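To illustrate the kind of neighbor-list construction mentioned above (introduced to improve scaling with system size), here is a minimal cell-list sketch in Python; the cutoff handling and data layout are simplified assumptions and do not reflect the authors' GPU implementation.

    import numpy as np

    def build_neighbor_list(pos, box, rcut):
        """Bin particles into cells of size >= rcut, then compare only particles in
        neighboring cells (cubic periodic box, minimum-image convention)."""
        ncell = max(1, int(np.floor(box / rcut)))
        cell_size = box / ncell
        cell_idx = np.floor(pos / cell_size).astype(int) % ncell

        cells = {}
        for i, c in enumerate(map(tuple, cell_idx)):
            cells.setdefault(c, []).append(i)

        pairs = set()
        offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
        for c, members in cells.items():
            for off in offsets:
                nc = tuple((np.array(c) + off) % ncell)
                for i in members:
                    for j in cells.get(nc, []):
                        if j <= i:
                            continue          # count each pair once
                        d = pos[i] - pos[j]
                        d -= box * np.round(d / box)   # minimum image
                        if np.dot(d, d) < rcut * rcut:
                            pairs.add((i, j))
        return sorted(pairs)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        pos = rng.uniform(0.0, 10.0, size=(500, 3))
        print(len(build_neighbor_list(pos, box=10.0, rcut=2.5)))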
Plasma Interactions With Spacecraft (I)
2009-04-01
with the Windows, Red Hat Linux, and MacOS X environments. We wrote N2kScriptRunner, a C++ code that runs a Nascap-2k script outside of the Java ... console-based and with a Java interface), a stand-alone program that reads and writes Nascap-2k database files. This program has proved invaluable ... surface currents for DSX and prototyped it in Java. A description of the algorithm and the prototype implementation is in Section 3.
BHDD: Primordial black hole binaries code
NASA Astrophysics Data System (ADS)
Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco
2018-06-01
BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J
2013-04-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
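As a minimal, generic sketch of the idea described above, wrapping an existing command-line scientific code behind a web service, one could expose a legacy executable through a small HTTP endpoint as below. This is not SCEC's actual Java/SOAP framework; the executable name and query parameters are placeholders.

    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    LEGACY_BINARY = "./velocity_model"   # placeholder for an existing Fortran/C executable

    class CodeWrapper(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            lat = query.get("lat", ["34.0"])[0]
            lon = query.get("lon", ["-118.0"])[0]
            try:
                # Run the legacy code with the request parameters and capture its output.
                result = subprocess.run([LEGACY_BINARY, lat, lon],
                                        capture_output=True, text=True, timeout=60)
                status, body = 200, result.stdout.encode() or b"no output"
            except (OSError, subprocess.TimeoutExpired) as exc:
                status, body = 500, str(exc).encode()
            self.send_response(status)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), CodeWrapper).serve_forever()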
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
A comparison of cosmological hydrodynamic codes
NASA Technical Reports Server (NTRS)
Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.
1994-01-01
We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3, where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N^3 = 32^3, 64^3, 128^3, and 256^3 cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
SLHAplus: A library for implementing extensions of the standard model
NASA Astrophysics Data System (ADS)
Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.
2011-03-01
We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summaryProgram title: SLHAplus_1.3 Catalogue identifier: AEHX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6283 No. of bytes in distributed program, including test data, etc.: 52 119 Distribution format: tar.gz Programming language: C Computer: IBM PC, MAC Operating system: UNIX (Linux, Darwin, Cygwin) RAM: 2000 MB Classification: 11.1 Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec
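For reference, the bi-unitary diagonalisation mentioned above is the singular value decomposition commonly used for fermion mass matrices; in schematic notation (not the library's specific calling convention),

    \hat{M} = U^{\dagger}\, M\, V

with U and V unitary and M̂ diagonal with non-negative entries, while the unitary case reduces to M̂ = U† M U for Hermitian M.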
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, a hybrid N-body integrator executed on CPU-based (super)computers. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the scaled cut-off radius R̃_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and Δt/R̃_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥ 10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
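Schematically, a particle-particle particle-tree (P³T) scheme of this kind splits the force on particle i with a changeover function W(r) around the cut-off radius, handling the near field with the direct Hermite integrator and the far field with the tree. The notation below is an assumed sketch rather than PENTACLE's exact formulation:

    \mathbf{a}_i \simeq
    \underbrace{\sum_{r_{ij} < r_{\rm cut}} \frac{G m_j\, \mathbf{r}_{ij}}{r_{ij}^3}\, W(r_{ij})}_{\text{direct, 4th-order Hermite}}
    \;+\;
    \underbrace{\sum_{j} \frac{G m_j\, \mathbf{r}_{ij}}{r_{ij}^3}\,\big[1 - W(r_{ij})\big]}_{\text{Barnes-Hut tree}}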
Statistical Analysis of CFD Solutions from the Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
2002-01-01
A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-Version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination previously tested in several European wind tunnels and for which a previous N-Version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, Validation, and reporting of solutions.
NASA Astrophysics Data System (ADS)
Tóth, Gábor; Keppens, Rony
2012-07-01
The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.
NEQAIR96,Nonequilibrium and Equilibrium Radiative Transport and Spectra Program: User's Manual
NASA Technical Reports Server (NTRS)
Whiting, Ellis E.; Park, Chul; Liu, Yen; Arnold, James O.; Paterson, John A.
1996-01-01
This document is the User's Manual for a new version of the NEQAIR computer program, NEQAIR96. The program is a line-by-line and a line-of-sight code. It calculates the emission and absorption spectra for atomic and diatomic molecules and the transport of radiation through a nonuniform gas mixture to a surface. The program has been rewritten to make it easy to use, run faster, and include many run-time options that tailor a calculation to the user's requirements. The accuracy and capability have also been improved by including the rotational Hamiltonian matrix formalism for calculating rotational energy levels and Hoenl-London factors for dipole and spin-allowed singlet, doublet, triplet, and quartet transitions. Three sample cases are also included to help the user become familiar with the steps taken to produce a spectrum. A new user interface is included that uses check location, to select run-time options and to enter selected run data, making NEQAIR96 easier to use than the older versions of the code. The ease of its use and the speed of its algorithms make NEQAIR96 a valuable educational code as well as a practical spectroscopic prediction and diagnostic code.
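The line-of-sight transport that a line-by-line code of this type evaluates can be summarized by the standard one-dimensional radiative transfer solution along a path s toward the surface; this is the generic textbook form, given here only for orientation:

    I_\lambda(L) = I_\lambda(0)\, e^{-\tau_\lambda(0,L)}
                 + \int_0^{L} \epsilon_\lambda(s)\, e^{-\tau_\lambda(s,L)}\, ds,
    \qquad
    \tau_\lambda(s,L) = \int_s^{L} \kappa_\lambda(s')\, ds'

where ε_λ is the local emission coefficient and κ_λ the absorption coefficient of the (generally nonuniform) gas mixture.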
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementations are available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented with MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes a neighbor list, the Voronoi density, the Voronoi cell volume, and the density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
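For a sense of the quantities PARAVT computes, here is a small serial illustration using SciPy's Qhull bindings: the Voronoi cell volume of each interior particle and the corresponding Voronoi density estimate. This is only an illustrative serial analogue, not PARAVT itself.

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_densities(points):
        """Return unit-mass Voronoi density estimates (1 / cell volume) per point;
        points with unbounded cells at the domain edge get NaN."""
        vor = Voronoi(points)
        dens = np.full(len(points), np.nan)
        for i, region_index in enumerate(vor.point_region):
            region = vor.regions[region_index]
            if len(region) == 0 or -1 in region:   # unbounded cell
                continue
            volume = ConvexHull(vor.vertices[region]).volume
            dens[i] = 1.0 / volume
        return dens

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        pts = rng.uniform(0.0, 1.0, size=(2000, 3))
        rho = voronoi_densities(pts)
        print(np.nanmedian(rho))   # roughly the mean number density for a uniform sample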
Oxygen Limited Bioreactors System For Nitrogen Removal Using Immobilized Mix Culture
NASA Astrophysics Data System (ADS)
Pathak, B. K.; Sumino, T.; Saiki, Y.; Kazama, F.
2005-12-01
Recently, nutrient concentrations, especially nitrogen, in natural waters have become a worldwide concern. Most of the effort has been devoted to the removal of high concentrations of nitrogen, especially from wastewater treatment plants. The removal efficiency is targeted considering the effluent discharge standard set by the national environment agency. In many cases it does not meet the required standard and the receiving water is polluted. Eutrophication in natural water bodies has been reported even when the nitrogen concentration is low, and the self-purification capacity of natural systems is not sufficient to remove the nitrogen owing to complex phenomena. In order to recover a pristine water environment, it is essential to explore bioreactor systems for natural water systems using an immobilized mixed culture. Microorganisms were entrapped in polyethylene glycol (PEG) prepolymer gel and cut into 3 mm cubic immobilized pellets. Four laboratory-scale micro bioreactors of 0.1 L volume were packed with immobilized pellets at a 50% packing ratio. RUN1, RUN2, RUN3, and RUN4 were packed with immobilized pellets from reservoir sediments, activated sludge (AS), a mixture of AS, AG and biodegradable plastic, and anaerobic granules (AG), respectively. Water from the Shiokawa Reservoir was fed to all reactors with supplemental ammonia and nitrite nitrogen as specified in the results and discussion. The reactors were operated in a dark incubation room in continuous-flow mode with a hydraulic retention time of 12 hours under oxygen-limiting conditions. Ammonium, nitrate, nitrite nitrogen, and total organic carbon (TOC) concentrations were measured as described in APWA and AWWA (1998). The four laboratory-scale bioreactors containing different combinations of immobilized cells were monitored for 218 days. Influent NH4+-N and NO2--N concentrations were 2.27±0.43 and 2.05±0.41 mg/l, respectively. Average dissolved oxygen concentration and pH in the reactors were 0.40-2.5 mg/l and 6.5-7.4, respectively. The molar ratio of NO2--N to NH4+-N varied from 0.85 to 4.1, and RUN3 was closest to the stoichiometric ratio of the anaerobic ammonia oxidation process. Total nitrogen removal in all reactors ranged from 11-79%, and RUN3 showed the best removal performance (Table 1).

Table 1. Characteristics of the N removal process
Parameters                      RUN1    RUN2    RUN3    RUN4
Effluent TOC (mg/l)             1.22    2.08    2.33    1.97
NO2--N / NH4+-N converted       1.18    0.85    1.32    4.15
Average NH4+-N removal %        86      95      74      32
Average NO2--N removal %        97      81      98      92
Average TN removal %            11      36      79      59

Four different kinds of laboratory-scale nitrogen removal bioreactors were monitored for 218 days. Comparing the reactors on the basis of the observed data, the bioreactor containing the mixed culture (RUN3) removed 79% of the incoming total nitrogen, suggesting it is best suited for nitrogen removal in natural water systems. Further study at pilot scale is recommended to understand scaling effects and other natural phenomena.
Comparison of Upright Gait with Supine Bungee-Cord Gait
NASA Technical Reports Server (NTRS)
Boda, Wanda L.; Hargens, Alan R.; Campbell, J. A.; Yang, C.; Holton, Emily M. (Technical Monitor)
1998-01-01
Running on a treadmill with bungee-cord resistance is currently used on the Russian space station MIR as a countermeasure for the loss of bone and muscular strength which occurs during spaceflight. However, it is unknown whether ground reaction force (GRF) at the feet using bungee-cord resistance is similar to that which occurs during upright walking and running on Earth. We hypothesized that the GRFs generated during upright walking and running are greater than the GRFs generated during supine bungee-cord gait. Eleven healthy subjects walked (4.8 +/- 0.13 km/h, mean +/- SE) and ran (9.1 +/- 0.51 km/h) during upright and supine bungee-cord exercise on an active treadmill. Subjects exercised for 3 min in each condition using a resistance of 1 body weight calibrated during an initial, stationary standing position. Data were sampled at a frequency of 500 Hz and the mean of 3 trials was analyzed for each condition. A repeated measures analysis of variance tested significance between the conditions. Peak GRFs during upright walking were significantly greater (1084.9 +/- 111.4 N) than during supine bungee-cord walking (770.3 +/- 59.8 N; p less than 0.05). Peak GRFs were also significantly greater for upright running (1548.3 +/- 135.4 N) than for supine bungee-cord running (1099.5 +/- 158.46 N). Analysis of GRF curves indicated that forces decreased throughout the stance phase for bungee-cord gait but not during upright gait. These results indicate that bungee-cord exercise may not create sufficient loads at the feet to counteract the loss of bone and muscular strength that occurs during long-duration exposure to microgravity.
Experience with a vectorized general circulation weather model on Star-100
NASA Technical Reports Server (NTRS)
Soll, D. B.; Habra, N. R.; Russell, G. L.
1977-01-01
A version of an atmospheric general circulation model was vectorized to run on a CDC STAR 100. The numerical model was coded and run in two different vector languages, CDC and LRLTRAN. A factor of 10 speed improvement over an IBM 360/95 was realized. Efficient use of the STAR machine required some redesigning of algorithms and logic. This precludes the application of vectorizing compilers on the original scalar code to achieve the same results. Vector languages permit a more natural and efficient formulation for such numerical codes.
GRADSPMHD: A parallel MHD code based on the SPH formalism
NASA Astrophysics Data System (ADS)
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
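In the "grad-h" SPH formalism referred to above, each particle's density and smoothing length are obtained self-consistently, and a variable-h correction factor Ω enters the discretized equations; schematically, in standard SPMHD notation assumed here,

    \rho_i = \sum_j m_j\, W(|\mathbf{r}_i - \mathbf{r}_j|, h_i),
    \qquad
    \Omega_i = 1 - \frac{\partial h_i}{\partial \rho_i}
               \sum_j m_j \frac{\partial W_{ij}(h_i)}{\partial h_i}

with the factors Ω_i appearing in the momentum and induction equations derived from the variational principle.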
An N-body Integrator for Planetary Rings
NASA Astrophysics Data System (ADS)
Hahn, Joseph M.
2011-04-01
A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of the particle's radial distance to all streamlines. And because particles are responding to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges typically execute in about ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
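The key simplification, treating each streamline as a long straight wire of constant linear density λ, means a nearby particle feels a radial acceleration that depends only on its radial distance Δ from the wire. In the standard thin-wire approximation (sketched here, not quoted from the abstract),

    a(\Delta) \simeq \frac{2\, G\, \lambda}{\Delta}

directed toward the wire, which is why the particle's gravity from all streamlines reduces to a simple function of its radial separations.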
Drag Prediction for the NASA CRM Wing-Body-Tail Using CFL3D and OVERFLOW on an Overset Mesh
NASA Technical Reports Server (NTRS)
Sclafani, Anthony J.; DeHaan, Mark A.; Vassberg, John C.; Rumsey, Christopher L.; Pulliam, Thomas H.
2010-01-01
In response to the fourth AIAA CFD Drag Prediction Workshop (DPW-IV), the NASA Common Research Model (CRM) wing-body and wing-body-tail configurations are analyzed using the Reynolds-averaged Navier-Stokes (RANS) flow solvers CFL3D and OVERFLOW. Two families of structured, overset grids are built for DPW-IV. Grid Family 1 (GF1) consists of a coarse (7.2 million), medium (16.9 million), fine (56.5 million), and extra-fine (189.4 million) mesh. Grid Family 2 (GF2) is an extension of the first and includes a superfine (714.2 million) and an ultra-fine (2.4 billion) mesh. The medium grid anchors both families with an established build process for accurate cruise drag prediction studies. This base mesh is coarsened and enhanced to form a set of parametrically equivalent grids that increase in size by a factor of roughly 3.4 from one level to the next denser level. Both CFL3D and OVERFLOW are run on GF1 using a consistent numerical approach. Additional OVERFLOW runs are made to study effects of differencing scheme and turbulence model on GF1 and to obtain results for GF2. All CFD results are post-processed using Richardson extrapolation, and approximate grid-converged values of drag are compared. The medium grid is also used to compute a trimmed drag polar for both codes.
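The Richardson extrapolation used in post-processing grid-convergence studies like this one estimates the grid-converged value from two solutions on systematically refined grids; in its standard form, with refinement ratio r and observed order of accuracy p,

    f_{h \to 0} \approx f_{\rm fine} + \frac{f_{\rm fine} - f_{\rm coarse}}{r^{p} - 1}

which is applied here to the computed drag values on successive members of each grid family.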
Fitness profiling in handball: physical and physiological characteristics of elite players.
Sporis, Goran; Vuleta, Dinko; Vuleta, Dinko; Milanović, Dragan
2010-09-01
The purpose of this study was to describe the structural and functional characteristics of elite Croatian handball players and to evaluate whether the players in different positional roles have different physical and physiological profiles. According to the positional roles, players were categorized as goalkeepers (n = 13), wing players (n = 26), backcourt players (n = 28) and pivot players (n = 25). The goalkeepers were older (p < 0.01), and the pivot players were more experienced (p < 0.01), than the backcourt players. The wings were the shortest players in the team. The pivots were taller and heavier than the backcourt and wing players (p < 0.01), whereas the backcourt players were taller than the wings (p < 0.01). Goalkeepers had more body fat than the backcourt and wing players (p < 0.01). The backcourt players had a lower percentage of body fat. The backcourt players were the quickest players in the team when looking at values of maximal running speed on a treadmill. The goalkeepers were the slowest players in the team (p < 0.01). The best average results concerning maximal heart rate were detected among the backcourt players. There were no statistically significant differences between the players' positions when measuring blood lactate and maximal heart rate. A strong negative correlation was found between body fat and maximal running speed (r = -0.68, p < 0.01). Coaches are able to use this information to determine which type of profile is needed for a specific position. Experienced coaches can use this information in the process of designing a training program to maximize the fitness development of handball players, with one purpose only, to achieve success in handball.
Genetically improved BarraCUDA.
Langdon, W B; Lam, Brian Yee Hong
2017-01-01
BarraCUDA is an open source C program which uses the BWA algorithm in parallel with nVidia CUDA to align short next generation DNA sequences against a reference genome. Recently its source code was optimised using "Genetic Improvement". The genetically improved (GI) code is up to three times faster on short paired end reads from The 1000 Genomes Project and 60% more accurate on a short BioPlanet.com GCAT alignment benchmark. GPGPU BarraCUDA running on a single K80 Tesla GPU can align short paired end nextGen sequences up to ten times faster than bwa on a 12 core server. The speed up was such that the GI version was adopted and has been regularly downloaded from SourceForge for more than 12 months.
Recognition of military-specific physical activities with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2010-11-01
The purpose of this study was to develop and validate an algorithm for recognizing military-specific, physically demanding activities using body-fixed sensors. To develop the algorithm, the first group of study participants (n = 15) wore body-fixed sensors capable of measuring acceleration, step frequency, and heart rate while completing six military-specific activities: walking, marching with backpack, lifting and lowering loads, lifting and carrying loads, digging, and running. The accuracy of the algorithm was tested in these isolated activities in a laboratory setting (n = 18) and in the context of daily military training routine (n = 24). The overall recognition rates during isolated activities and during daily military routine activities were 87.5% and 85.5%, respectively. We conclude that the algorithm adequately recognized six military-specific physical activities based on sensor data alone both in a laboratory setting and in the military training environment. By recognizing type of physical activities this objective method provides additional information on military-job descriptions.
Compressional Alfvén eigenmodes in rotating spherical tokamak plasmas
Smith, H. M.; Fredrickson, E. D.
2017-02-07
Spherical tokamaks often have a considerable toroidal plasma rotation of several tens of kHz. Compressional Alfvén eigenmodes in such devices therefore experience a frequency shift, which, if the plasma were rotating as a rigid body, would be a simple Doppler shift. However, since the rotation frequency depends on minor radius, the eigenmodes are affected in a more complicated way. The eigenmode solver CAE3B (Smith et al 2009 Plasma Phys. Control. Fusion 51 075001) has been extended to account for toroidal plasma rotation. The results show that the eigenfrequency shift due to rotation can be approximated by a rigid body rotation with a frequency computed from a spatial average of the real rotation profile weighted with the eigenmode amplitude. To investigate the effect of extending the computational domain to the vessel wall, a simplified eigenmode equation, yet retaining plasma rotation, is solved by a modified version of the CAE code used in Fredrickson et al (2013 Phys. Plasmas 20 042112). Lastly, both solving the full eigenmode equation, as in the CAE3B code, and placing the boundary at the vessel wall, as in the CAE code, significantly influence the calculated eigenfrequencies.
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
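For context, integrators in this Wisdom-Holman family advance the Hamiltonian by splitting it into an exactly solvable Kepler part and an interaction part and composing the two flows symmetrically over a step h; schematically (a generic symmetric splitting, not HB15's specific operator ordering),

    H = H_{\rm Kepler} + H_{\rm int},
    \qquad
    e^{h\,\hat{H}} \approx
    e^{\frac{h}{2}\hat{H}_{\rm int}}\;
    e^{h\,\hat{H}_{\rm Kepler}}\;
    e^{\frac{h}{2}\hat{H}_{\rm int}}

where the hatted symbols denote the Lie operators of the corresponding Hamiltonian pieces; the Kepler flow is evaluated with the exact solver of Wisdom & Hernandez.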
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code is developed from our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size attainable on a given device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on a workstation for simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles. Good scaling across many cluster nodes is presented for two-patch particles.
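The domain-decomposition communication pattern described above, where each GPU owns one spatial domain and exchanges boundary ("halo") particles with its neighbours each step, can be sketched with mpi4py as below; the one-dimensional neighbour layout and buffer contents are illustrative assumptions, not GALAMOST's actual data structures.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # 1D domain decomposition along x: this rank owns [rank*Lx, (rank+1)*Lx).
    Lx, skin = 10.0, 1.0
    rng = np.random.default_rng(rank)
    local = rng.uniform(rank * Lx, (rank + 1) * Lx, size=(1000, 3))

    # Particles near the domain faces must be shared with the neighbouring rank.
    left, right = (rank - 1) % size, (rank + 1) % size
    send_left  = local[local[:, 0] <  rank * Lx + skin]
    send_right = local[local[:, 0] >= (rank + 1) * Lx - skin]

    # Exchange halos with both neighbours (periodic in x).
    recv_right = comm.sendrecv(send_left,  dest=left,  source=right)
    recv_left  = comm.sendrecv(send_right, dest=right, source=left)

    halo = np.vstack([recv_left, recv_right])
    # 'local' plus 'halo' now suffices to evaluate short-ranged forces in this domain.
    print(f"rank {rank}: {len(local)} local, {len(halo)} halo particles")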
X-Antenna: A graphical interface for antenna analysis codes
NASA Technical Reports Server (NTRS)
Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.
1995-01-01
This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.
User's and test case manual for FEMATS
NASA Technical Reports Server (NTRS)
Chatterjee, Arindam; Volakis, John; Nurnberger, Mike; Natzke, John
1995-01-01
The FEMATS program incorporates first-order edge-based finite elements and vector absorbing boundary conditions into the scattered field formulation for computation of the scattering from three-dimensional geometries. The code has been validated extensively for a large class of geometries containing inhomogeneities and satisfying transition conditions. For geometries that are too large for the workstation environment, the FEMATS code has been optimized to run on various supercomputers. Currently, FEMATS has been configured to run on the HP 9000 workstation, vectorized for the Cray Y-MP, and parallelized to run on the Kendall Square Research (KSR) architecture and the Intel Paragon.
Benchmarking NNWSI flow and transport codes: COVE 1 results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, N.K.
1985-06-01
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.
NASA Technical Reports Server (NTRS)
Junge, M. K.; Giacomi, M. J.
1981-01-01
The results of a human factors test to assay the suitability of a prototype general purpose work station (GPWS) for biosciences experiments on the fourth Spacelab mission are reported. The evaluation was performed to verify that users of the GPWS would optimally interact with the GPWS configuration and instrumentation. Six male subjects sat on stools positioned to allow assimilation of the zero-g body posture. Trials were run concerning the operator viewing angles facing the console, the console color, procedures for injecting rats with dye, a rat blood cell count, mouse dissection, squirrel monkey transfer, and plant fixation. The trials were run for several days in order to gauge improvement or poor performance conditions. Better access to the work surface was found necessary, together with more distinct and better located LEDs, better access window latches, clearer sequences on control buttons, color-coded sequential buttons, and provisions made for an intercom system when operators of the GPWS work in tandem.
Orthogonal patterns in binary neural networks
NASA Technical Reports Server (NTRS)
Baram, Yoram
1988-01-01
A binary neural network that stores only mutually orthogonal patterns is shown to converge, when probed by any pattern, to a pattern in the memory space, i.e., the space spanned by the stored patterns. The latter are shown to be the only members of the memory space under a certain coding condition, which allows maximum storage of M = (2N)^0.5 patterns, where N is the number of neurons. The stored patterns are shown to have basins of attraction of radius N/(2M), within which errors are corrected with probability 1 in a single update cycle. When the probe falls outside these regions, the error correction capability can still be increased to 1 by repeatedly running the network with the same probe.
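A minimal sketch of the scheme described above, assuming Hadamard rows as the mutually orthogonal ±1 patterns and a Hebbian outer-product weight matrix; the construction and names are illustrative, not the paper's code.

```python
# Binary network storing mutually orthogonal +/-1 patterns and converging
# under repeated sign-threshold updates.
import numpy as np
from scipy.linalg import hadamard

N = 16                                   # number of neurons
M = int((2 * N) ** 0.5)                  # storage bound M = (2N)^0.5 from the abstract
patterns = hadamard(N)[:M]               # M mutually orthogonal +/-1 patterns
W = patterns.T @ patterns                # Hebbian weight matrix

def run(probe, n_cycles=10):
    """Repeatedly apply the sign-threshold update until the state stops changing."""
    s = probe.copy()
    for _ in range(n_cycles):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1            # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Probe within the basin of attraction (radius ~ N/(2M)): flip one bit of a stored pattern.
probe = patterns[0].copy()
probe[0] *= -1
print(np.array_equal(run(probe), patterns[0]))   # expected: True
```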
Physical activity prevents augmented body fat accretion in moderately iron-deficient rats.
McClung, James P; Andersen, Nancy E; Tarr, Tyson N; Stahl, Chad H; Young, Andrew J
2008-07-01
Recent studies describe an association between poor iron status and obesity in humans, although the mechanism explaining this relationship is unclear. The present study aimed to determine the effect of moderate iron deficiency and physical activity (PA) on body composition in an animal model. Male Sprague-Dawley rats consumed iron-adequate (IA; 40 mg/kg) or moderately iron-deficient (ID; 9 mg/kg) diets ad libitum for 12 wk. Rats were assigned to 4 treatment groups (n = 10 per group): IA, sedentary (IAS); IA, PA (IAPA); ID, sedentary (IDS); or ID, PA (IDPA). Activity involved running on motorized running wheels at 4 m/min for 1 h/d for 5 d/wk. After 12 wk, ID rats were not anemic, but body iron stores were reduced as indicated by diminished (P < 0.05) femur iron compared with IA rats. Treatment group did not affect body weight or feed consumption. However, fat mass was greater (P < 0.05) in IDS rats (38.6 +/- 6.7%) than IAS (31.8 +/- 2.9%), IAPA (31.8 +/- 2.0%), and IDPA (32.8 +/- 4.5%) rats. Furthermore, lean body mass was diminished in IDS rats (58.7 +/- 6.8%) compared with IAS (65.6 +/- 3.0%), IAPA (65.6 +/- 2.1%), and IDPA (64.7 +/- 4.5%) rats. Thus, moderate iron deficiency may cause increased body fat accretion in rats and PA attenuates that effect.
Responding for sucrose and wheel-running reinforcement: effect of body weight manipulation.
Belke, Terry W
2004-02-27
As body weight increases, the excitatory strength of a stimulus signaling an opportunity to run should weaken to a greater degree than that of a stimulus signaling an opportunity to eat. To test this hypothesis, six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. The effect of varying body weight on responding maintained by these two reinforcers was investigated by systematically increasing and decreasing post-session food amounts. The initial body weight was 335 g. Body weights were increased to approximately 445 g and subsequently returned to 335 g. As body weight increased, overall and local lever-pressing rates decreased while post-reinforcement pauses lengthened. Analysis of post-reinforcement pauses and local lever-pressing rates in terms of transitions between successive reinforcers revealed that local response rates in the presence of stimuli signaling upcoming wheel and sucrose reinforcers were similarly affected. However, pausing in the presence of the stimulus signaling a wheel-running reinforcer lengthened to a greater extent than did pausing in the presence of the stimulus signaling sucrose. This result suggests that as body weight approaches ad-lib levels, the likelihood of initiation of responding to obtain an opportunity to run approaches zero and the animal "rejects" the opportunity to run in a manner similar to the rejection of less preferred food items in studies of food selectivity.
Accurate double many-body expansion potential energy surface for the 2(1)A' state of N2O.
Li, Jing; Varandas, António J C
2014-08-28
An accurate double many-body expansion potential energy surface is reported for the 2(1)A' state of N2O. The new double many-body expansion (DMBE) form has been fitted to a wealth of ab initio points that have been calculated at the multi-reference configuration interaction level using the full-valence-complete-active-space wave function as reference and the cc-pVQZ basis set, and subsequently corrected semiempirically via double many-body expansion-scaled external correlation method to extrapolate the calculated energies to the limit of a complete basis set and, most importantly, the limit of an infinite configuration interaction expansion. The topographical features of the novel potential energy surface are then examined in detail and compared with corresponding attributes of other potential functions available in the literature. Exploratory trajectories have also been run on this DMBE form with the quasiclassical trajectory method, with the thermal rate constant so determined at room temperature significantly enhancing agreement with experimental data.
Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer codes
NASA Technical Reports Server (NTRS)
Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.
1989-01-01
The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.
Chilibeck, Philip D; Magnus, Charlene; Anderson, Matthew
2007-12-01
Rugby union football requires muscular strength and endurance, as well as aerobic endurance. Creatine supplementation may enhance muscular performance, but it is unclear if it would interfere with aerobic endurance during running because of increased body mass. The purpose of this study was to determine if creatine supplementation during 8 weeks of a season of rugby union football can increase muscular performance, without negatively affecting aerobic endurance. Rugby union football players were randomized to receive 0.1 g.kg(-1).d(-1) creatine monohydrate (n=9) or placebo (n=9) during 8 weeks of the rugby season. Players practiced twice per week for approximately 2 h per session and played one 80 min game per week. Before and after the 8 weeks, players were measured for body composition (air displacement plethysmography), muscular endurance (number of repetitions at 75% of one repetition maximum (1 RM) for bench press and leg press), and aerobic endurance (Leger shuttle-run test with 1 min stages of progressively increasing speed). There were time main effects for body mass (-0.7+/-0.4 kg; p=0.05), fat mass (-1.9+/-0.8 kg; p<0.05), and a trend for an increase in lean tissue mass (+1.2+/-0.5 kg; p=0.07), with no differences between groups. The group receiving creatine supplementation had a greater increase in the number of repetitions for combined bench press and leg press tests compared with the placebo group (+5.8+/-1.4 vs. +0.9+/-2.0 repetitions; p<0.05). There were no changes in either group for aerobic endurance. Creatine supplementation during a rugby union football season is effective for increasing muscular endurance, but has no effect on body composition or aerobic endurance.
Effect of exercise training on saliva brain derived neurotrophic factor, catalase and vitamin c.
Babaei, Parvin; Damirchi, Arsalan; Soltani Tehrani, Bahram; Nazari, Yazgaldi; Sariri, Reyhaneh; Hoseini, Rastegar
2016-01-01
Background: The balance between the production of reactive oxygen species (ROS) and antioxidant defense in the body has important health implications. The aim of this study was to investigate the changes in salivary antioxidants (catalase and vitamin C) and brain-derived neurotrophic factor (BDNF) in sedentary men at rest and after acute exhaustive exercise. Methods: This randomized controlled clinical trial (registry code IRCT2011053212431N1) recruited twenty-five sedentary men (age = 21±3 yrs; height = 172±8 cm; weight = 66±9 kg; VO2max = 37.6±7.4 mL·kg(-1)·min(-1)) who participated in a double-blind randomized experiment. Unstimulated whole saliva samples were collected before, immediately after, and 1 hour after exhaustive treadmill running. Catalase, vitamin C (Vit C), and BDNF concentrations were determined using biochemical assays and ELISA, respectively. Repeated-measures ANOVA and Bonferroni post hoc tests were used to analyze the data. Results: Acute intensive exercise caused a reduction in salivary catalase, Vit C, and BDNF concentrations (p<0.05) compared with pre-exercise values. Both catalase and Vit C showed a tendency to return to pre-exercise values after one hour, whereas BDNF continued to decline at least 1 hour after the end of exercise. Conclusion: A reduction in the antioxidant capacity of saliva might reflect a disturbance of the body's natural antioxidant defense mechanisms after acute intensive physical stress, with possible further health-threatening consequences.
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the User's Guide, and describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
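To illustrate the alternating-direction implicit (ADI) marching idea mentioned above, the sketch below performs one Peaceman-Rachford ADI step for a simple 2D heat equation, so that each half step only requires tridiagonal solves. It is not the Proteus solver; the square grid, homogeneous Dirichlet boundaries and all names are illustrative assumptions.

```python
# One Peaceman-Rachford ADI step for u_t = alpha * (u_xx + u_yy) on a square
# grid with u = 0 boundaries: implicit in x for the first half step, implicit
# in y for the second, so only tridiagonal systems are solved.
import numpy as np
from scipy.linalg import solve_banded

def second_diff_x(u):
    d = np.zeros_like(u)
    d[1:-1, :] = u[2:, :] - 2.0 * u[1:-1, :] + u[:-2, :]
    return d

def second_diff_y(u):
    d = np.zeros_like(u)
    d[:, 1:-1] = u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]
    return d

def adi_step(u, alpha, dt, dx):
    r = alpha * dt / (2.0 * dx**2)
    n = u.shape[0]                       # square grid assumed for brevity
    ab = np.zeros((3, n))                # (I - r*D2) in banded form for solve_banded
    ab[0, 1:] = -r                       # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r             # diagonal
    ab[2, :-1] = -r                      # subdiagonal
    # Half step 1: implicit in x (solves run down columns), explicit in y.
    u_half = solve_banded((1, 1), ab, u + r * second_diff_y(u))
    # Half step 2: implicit in y, explicit in x (transpose so the solve runs along y).
    u_new = solve_banded((1, 1), ab, (u_half + r * second_diff_x(u_half)).T).T
    return u_new
```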
TIGER: Turbomachinery interactive grid generation
NASA Technical Reports Server (NTRS)
Soni, Bharat K.; Shih, Ming-Hsin; Janus, J. Mark
1992-01-01
A three-dimensional, interactive grid generation code, TIGER, is being developed for analysis of flows around ducted or unducted propellers. TIGER is a customized grid generator that combines new technology with methods from general grid generation codes. The code generates multiple block, structured grids around multiple blade rows with a hub and shroud for either C grid or H grid topologies. The code is intended for use with a Euler/Navier-Stokes solver also being developed, but is general enough for use with other flow solvers. TIGER features a Silicon Graphics interactive graphics environment that displays a pop-up window, graphics window, and text window. The geometry is read as a discrete set of points with options for several industrial standard formats and NASA standard formats. Various splines are available for defining the surface geometries. Grid generation is done either interactively or through a batch mode operation using history files from a previously generated grid. The batch mode operation can be done either with a graphical display of the interactive session or with no graphics so that the code can be run on another computer system. Run time can be significantly reduced by running on a Cray Y-MP.
Accurate Treatment of Collision and Water-Delivery in Models of Terrestrial Planet Formation
NASA Astrophysics Data System (ADS)
Haghighipour, N.; Maindl, T. I.; Schaefer, C. M.; Wandel, O.
2017-08-01
We have developed a comprehensive approach for simulating collisions and the growth of embryos into terrestrial planets, in which we use a combination of SPH and N-body codes to accurately model collisions and the transfer of water and chemical compounds.
Parallel CARLOS-3D code development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putnam, J.M.; Kotulski, J.D.
1996-02-01
CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.
Transient dynamics capability at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.
1993-01-01
A brief overview of the transient dynamics capabilities at Sandia National Laboratories, with an emphasis on recent new developments and current research, is presented. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS is running on several different systems at SNL including Cray UNICOS, Hewlett-Packard HP-UX, Digital Equipment Ultrix, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, is presented. Abstracts and references for the codes are listed at the end of the report.
The Effects of Backwards Running Training on Forward Running Economy in Trained Males.
Ordway, Jason D; Laubach, Lloyd L; Vanderburgh, Paul M; Jackson, Kurt J
2016-03-01
Backwards running (BR) results in greater cardiopulmonary response and muscle activity compared with forward running (FR). BR has traditionally been used in rehabilitation for disorders such as stroke and lower leg extremity injuries, as well as in short bursts during various athletic events. The aim of this study was to measure the effects of sustained backwards running training on forward running economy in trained male athletes. Eight highly trained, male runners (26.13 ± 6.11 years, 174.7 ± 6.4 cm, 68.4 ± 9.24 kg, 8.61 ± 3.21% body fat, 71.40 ± 7.31 ml·kg(-1)·min(-1)) trained with BR while harnessed on a treadmill at 161 m·min(-1) for 5 weeks following a 5-week BR run-in period at a lower speed (134 m·min(-1)). Subjects were tested at baseline, postfamiliarized, and post-BR training for body composition, a ramped VO2max test, and an economy test designed for trained male runners. Subjects improved forward running economy by 2.54% (1.19 ± 1.26 ml·kg(-1)·min(-1), p = 0.032) at 215 m·min(-1). VO2max, body mass, lean mass, fat mass, and % body fat did not change (p > 0.05). Five weeks of BR training improved FR economy in healthy, trained male runners without altering VO2max or body composition. The improvements observed in this study could be a beneficial form of training to an already economical population to improve running economy.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
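The parallelization pattern described above can be sketched as follows: each MPI rank runs an independent instance of a serial generator with its own random stream, event generation and unweighting are coupled in one job, and only a small summary is reduced to rank 0 instead of writing intermediate files. This is not the actual Alpgen/Mira port; the toy generate/unweight functions and all names are illustrative assumptions.

```python
# Many independent instances of a serial generator under MPI, with generation
# and hit-or-miss unweighting coupled in a single job.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

rng = np.random.default_rng(seed=rank)        # independent random stream per rank

def generate_weighted_events(n):
    """Toy stand-in for the serial generator: events with weights."""
    return rng.exponential(scale=1.0, size=n)

def unweight(weights, w_max):
    """Keep each event with probability w / w_max (hit-or-miss unweighting)."""
    return weights[rng.random(len(weights)) < weights / w_max]

weights = generate_weighted_events(100_000)
kept = unweight(weights, w_max=weights.max())
n_total = comm.reduce(len(kept), op=MPI.SUM, root=0)   # only a small summary leaves the ranks
if rank == 0:
    print(f"unweighted events across all ranks: {n_total}")
```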
Western diet increases wheel running in mice selectively bred for high voluntary wheel running.
Meek, T H; Eisenmann, J C; Garland, T
2010-06-01
Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.
Evangelista, Fabiana S; Muller, Cynthia R; Stefano, Jose T; Torres, Mariana M; Muntanelli, Bruna R; Simon, Daniel; Alvares-da-Silva, Mario R; Pereira, Isabel V; Cogliati, Bruno; Carrilho, Flair J; Oliveira, Claudia P
2015-01-01
This study sought to determine the role of physical training (PT) on body weight (BW), energy balance, histological markers of nonalcoholic fatty liver disease (NAFLD), and metabolic gene expression in the liver of ob/ob mice. Adult male ob/ob mice were assigned to sedentary (S; n = 8) and trained (T; n = 9) groups. PT consisted of running sessions of 60 min at 60% of maximal speed, conducted five days per week for eight weeks. The BW of the S group was higher from the 4th to the 8th week of PT compared to their own BW at the beginning of the experiment. PT decreased daily food intake and increased resting oxygen consumption and energy expenditure in the T group. No difference was observed in respiratory exchange ratio, but the rates of carbohydrate and lipid oxidation, and maximal running capacity, were greater in the T than in the S group. Both groups showed liver steatosis but not inflammation. PT increased CPT1a and SREBP1c mRNA expression in the T group, but did not change MTP, PPAR-α, PPAR-γ, and NFKB mRNA expression. In conclusion, PT prevented body weight gain in ob/ob mice by inducing negative energy balance and increased physical exercise tolerance. However, PT did not change inflammatory gene expression and failed to prevent liver steatosis, possibly due to an upregulation of the SREBP1c transcription factor. These findings reveal that PT has a positive effect on body weight control but not on liver steatosis in a leptin-deficient condition.
Development of Web Interfaces for Analysis Codes
NASA Astrophysics Data System (ADS)
Emoto, M.; Watanabe, T.; Funaba, H.; Murakami, S.; Nagayama, Y.; Kawahata, K.
Several codes have been developed to analyze plasma physics. However, most of them are developed to run on supercomputers. Therefore, users who typically use personal computers (PCs) find it difficult to use these codes. In order to facilitate the widespread use of these codes, a user-friendly interface is required. The authors propose Web interfaces for these codes. To demonstrate the usefulness of this approach, the authors developed Web interfaces for two analysis codes. One of them is for FIT developed by Murakami. This code is used to analyze the NBI heat deposition, etc. Because it requires electron density profiles, electron temperatures, and ion temperatures as polynomial expressions, those unfamiliar with the experiments find it difficult to use this code, especially visitors from other institutes. The second one is for visualizing the lines of force in the LHD (large helical device) developed by Watanabe. This code is used to analyze the interference caused by the lines of force resulting from the various structures installed in the vacuum vessel of the LHD. This code runs on PCs; however, it requires that the necessary parameters be edited manually. Using these Web interfaces, users can execute these codes interactively.
NASA Astrophysics Data System (ADS)
Kral, Q.; Thebault, P.; Charnoz, S.
2014-01-01
The first attempt at developing a fully self-consistent code coupling dynamics and collisions to study debris discs (Kral et al. 2013) is presented. So far, these two crucial mechanisms were studied separately, with N-body and statistical collisional codes respectively, because of stringent computational constraints. We present a new model named LIDT-DD which is able to follow over long timescales the coupled evolution of dynamics (including radiation forces) and collisions in a self-consistent way.
LUNAR ACCRETION FROM A ROCHE-INTERIOR FLUID DISK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salmon, Julien; Canup, Robin M., E-mail: julien@boulder.swri.edu, E-mail: robin@boulder.swri.edu
2012-11-20
We use a hybrid numerical approach to simulate the formation of the Moon from an impact-generated disk, consisting of a fluid model for the disk inside the Roche limit and an N-body code to describe accretion outside the Roche limit. As the inner disk spreads due to a thermally regulated viscosity, material is delivered across the Roche limit and accretes into moonlets that are added to the N-body simulation. Contrary to an accretion timescale of a few months obtained with prior pure N-body codes, here the final stage of the Moon's growth is controlled by the slow spreading of the inner disk, resulting in a total lunar accretion timescale of ~10^2 years. It has been proposed that the inner disk may compositionally equilibrate with the Earth through diffusive mixing, which offers a potential explanation for the identical oxygen isotope compositions of the Earth and Moon. However, the mass fraction of the final Moon that is derived from the inner disk is limited by resonant torques between the disk and exterior growing moons. For initial disks containing <2.5 lunar masses (M_Moon), we find that a final Moon with mass >0.8 M_Moon contains ≤60% material derived from the inner disk, with this material preferentially delivered to the Moon at the end of its accretion.
Bergmann, Ryan M.; Rowland, Kelly L.; Radnović, Nikola; ...
2017-05-01
In this companion paper to "Algorithmic Choices in WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs" (doi:10.1016/j.anucene.2014.10.039), the WARP Monte Carlo neutron transport framework for graphics processing units (GPUs) is benchmarked against production-level central processing unit (CPU) Monte Carlo neutron transport codes for both performance and accuracy. We compare neutron flux spectra, multiplication factors, runtimes, speedup factors, and costs of various GPU and CPU platforms running either WARP, Serpent 2.1.24, or MCNP 6.1. WARP compares well with the results of the production-level codes, and it is shown that on the newest hardware considered, GPU platforms running WARP are between 0.8 and 7.6 times as fast as CPU platforms running production codes. Also, the GPU platforms running WARP were between 15% and 50% as expensive to purchase and between 80% and 90% as expensive to operate as equivalent CPU platforms performing at an equal simulation rate.
On Why It Is Impossible to Prove that the BDX930 Dispatcher Implements a Time-sharing System
NASA Technical Reports Server (NTRS)
Boyer, R. S.; Moore, J. S.
1983-01-01
The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt such as the BDX930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual-machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.
ERIC Educational Resources Information Center
Knechtle, Beat; Wirth, Andrea; Knechtle, Patrizia; Rosemann, Thomas
2009-01-01
We investigated whether ultraendurance runners in a 100-km run suffer a decrease of body mass and whether this loss consists of fat mass, skeletal muscle mass, or total body water. Male ultrarunners were measured pre- and postrace to determine body mass, fat mass, and skeletal muscle mass by using the anthropometric method. In addition,…
Jiao, Jiao; Li, Yi; Yao, Lei; Chen, Yajun; Guo, Yueping; Wong, Stephen H S; Ng, Frency S F; Hu, Junyan
2017-10-01
To investigate clothing-induced differences in human thermal response and running performance, eight male athletes participated in a repeated-measures study by wearing three sets of clothing (CloA, CloB, and CloC). CloA and CloB were body-mapping-designed, with 11% and 7% greater heat dissipation capacity, respectively, than CloC, the commonly used running clothing. The experiments consisted of steady-state running followed by an all-out performance run in a controlled hot environment. Participants' thermal responses such as core temperature (Tc), mean skin temperature (Tsk), heat storage (S), and the performance running time were measured. CloA resulted in a shorter performance time than CloC (323.1 ± 10.4 s vs. 353.6 ± 13.2 s, p = 0.01), and induced the lowest Tsk, smallest ΔTc, and smallest S in the resting and running phases. This study indicated that clothing made with different heat dissipation capacities affects athlete thermal responses and running performance in a hot environment. Practitioner Summary: A protocol that simulated the real situation in running competitions was used to investigate the effects of body-mapping-designed clothing on athletes' thermal responses and running performance. The findings confirmed the effects of optimised clothing with body-mapping design and advanced fabrics, and demonstrated the practical advantage of the developed clothing for exercise performance.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
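A minimal sketch of the kind of trajectory calculation performed by codes (4) through (6): integrate the drop's equation of motion with a drag term toward the local air velocity plus gravitational settling. The uniform flow field, the single drag time constant tau and the names are illustrative assumptions rather than the experimental drag relations used by the codes.

```python
# Water-drop trajectory: dv/dt = (u_air(x) - v)/tau + g, dx/dt = v.
import numpy as np
from scipy.integrate import solve_ivp

g = np.array([0.0, 0.0, -9.81])       # gravity, m/s^2
tau = 0.05                            # drag response time of the drop, s (assumed)

def air_velocity(x):
    """Stand-in for the potential-flow field computed about the body."""
    return np.array([50.0, 0.0, 0.0])  # uniform 50 m/s free stream (assumed)

def rhs(t, y):
    x, v = y[:3], y[3:]
    a = (air_velocity(x) - v) / tau + g   # drag toward local air velocity + gravity
    return np.concatenate([v, a])

y0 = np.concatenate([[0.0, 0.0, 0.0], air_velocity(np.zeros(3))])   # released at the free-stream speed
sol = solve_ivp(rhs, (0.0, 1.0), y0, max_step=1e-3)
print(sol.y[:3, -1])                  # drop position after 1 s
```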
CUDA Fortran acceleration for the finite-difference time-domain method
NASA Astrophysics Data System (ADS)
Hadi, Mohammed F.; Esmaeili, Seyed A.
2013-05-01
A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. Comparative assessment of trade-offs between GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU to CPU run time comparison and thus help in making better informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.
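For orientation, the sketch below shows the leapfrogged E/H update stencil at the core of FDTD, which is exactly the loop a GPU port parallelizes across grid points. The paper's code is 3D CUDA Fortran; this 1D Python toy with normalized units and an arbitrary Gaussian source is only an illustrative assumption.

```python
# 1D FDTD: staggered E and H fields updated in leapfrog fashion.
import numpy as np

nx, nt = 400, 1000
ez = np.zeros(nx)            # electric field at integer grid points
hy = np.zeros(nx - 1)        # magnetic field, staggered half a cell
c = 0.5                      # Courant number (normalized units)

for n in range(nt):
    hy += c * (ez[1:] - ez[:-1])                      # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])                # update E from the curl of H
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)    # soft Gaussian source at the centre
```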
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real-engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Mendel-GPU: haplotyping and genotype imputation on graphics processing units
Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth
2012-01-01
Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633
Lower-body determinants of running economy in male and female distance runners.
Barnes, Kyle R; Mcguigan, Michael R; Kilding, Andrew E
2014-05-01
A variety of training approaches have been shown to improve running economy in well-trained athletes. However, there is a paucity of data exploring lower-body determinants that may affect running economy and account for differences that may exist between genders. Sixty-three male and female distance runners were assessed in the laboratory for a range of metabolic, biomechanical, and neuromuscular measures potentially related to running economy (ml·kg(-1)·min(-1)) at a range of running speeds. At all common test velocities, women were more economical than men (effect size [ES] = 0.40); however, when compared in terms of relative intensity, men had better running economy (ES = 2.41). Leg stiffness (r = -0.80) and moment arm length (r = 0.90) showed large to extremely large correlations with running economy and with each other (r = -0.82). Correlations between running economy and kinetic measures (peak force, peak power, and time to peak force) for both genders were unclear. The relationship in stride rate (r = -0.27 to -0.31) was in the opposite direction to that of stride length (r = 0.32-0.49), and the relationship in contact time (r = -0.21 to -0.54) was opposite to that of flight time (r = 0.06-0.74). Although both leg stiffness and moment arm length are highly related to running economy, it seems that no single lower-body measure can completely explain differences in running economy between individuals or genders. Running economy is therefore likely determined by the sum of influences from multiple lower-body attributes.
A Vision of the Future: A School for Running.
ERIC Educational Resources Information Center
Spino, Mike
1979-01-01
Presents a vision of how a school of running could provide young people with learning experiences encompassing body and mind. The school would have four tracks: running, body work, inner space development, and academic subjects. Sea Pines Resort in South Carolina will be ideal for the kind of education described here. (Author/BEF)
Three-dimensional computer model for the atmospheric general circulation experiment
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
Voluntary Running Attenuates Metabolic Dysfunction in Ovariectomized Low-Fit Rats
Park, Young-Min; Padilla, Jaume; Kanaley, Jill A.; Zidon, Terese; Welly, Rebecca J.; Britton, Steven L.; Koch, Lauren G.; Thyfault, John P.; Booth, Frank W.; Vieira-Potter, Victoria J.
2016-01-01
INTRODUCTION Ovariectomy and high fat diet (HFD) worsen obesity and metabolic dysfunction associated with low aerobic fitness. Exercise training mitigates metabolic abnormalities induced by low aerobic fitness, but whether the protective effect is maintained following ovariectomy and HFD is unknown. PURPOSE This study determined whether, following ovariectomy and HFD, exercise training improves metabolic function in rats bred for low intrinsic aerobic capacity. METHODS Female rats selectively bred for low (LCR) and high (HCR) intrinsic aerobic capacity (n=30) were ovariectomized, fed HFD, and randomized to either a sedentary (SED) or voluntary wheel running (EX) group. Resting energy expenditure, glucose tolerance, and spontaneous physical activity were determined midway through the experiment, while body weight, wheel running volume, and food intake were assessed throughout the study. Body composition, circulating metabolic markers, and skeletal muscle gene and protein expression was measured at sacrifice. RESULTS EX reduced body weight and adiposity in LCR rats (−10% and −50%, respectively; P<0.05) and, unexpectedly, increased these variables in HCR rats (+7% and +37%, respectively; P<0.05) compared to their respective SED controls, likely due to dietary overcompensation. Wheel running volume was ~5-fold greater in HCR than LCR rats, yet EX enhanced insulin sensitivity equally in LCR and HCR rats (P<0.05). This EX-mediated improvement in metabolic function was associated with gene up-regulation of skeletal muscle IL-6&-10. EX also increased resting energy expenditure, skeletal muscle mitochondrial content (oxidative phosphorylation complexes and citrate synthase activity), and AMPK activation similarly in both lines (all P <0.05). CONCLUSION Despite a 5-fold difference in running volume between rat lines, EX similarly improved systemic insulin sensitivity, resting energy expenditure, and skeletal muscle mitochondrial content and AMPK activation in ovariectomized LCR and HCR rats fed HFD compared to their respective SED controls. PMID:27669449
Compactified cosmological simulations of the infinite universe
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-06-01
We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range, that it is balanced in following a comparable number of high- and low-k modes, and that its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically Projected Cosmological Simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The N^2 force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
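Two of the ingredients named above can be sketched compactly: a standard form of the inverse stereographic projection, which maps infinite 3D space onto a sphere embedded in 4D, and a naive O(N^2) direct force sum. This is not the StePS implementation; the radius R, the softening eps and the function names are illustrative assumptions.

```python
# (i) Compactification: inverse stereographic map from R^3 to a sphere of
#     radius R in R^4.  (ii) Naive O(N^2) pairwise force calculation.
import numpy as np

def inverse_stereographic(x, R=1.0):
    """Map points x in R^3 onto the 3-sphere of radius R embedded in R^4."""
    r2 = np.sum(x**2, axis=1, keepdims=True)
    u = 2.0 * R**2 * x / (r2 + R**2)
    w = R * (r2 - R**2) / (r2 + R**2)
    return np.hstack([u, w])                  # every output point has norm R

def direct_forces(pos, mass, eps=1e-2):
    """Naive O(N^2) gravitational accelerations (G = 1, Plummer softening eps)."""
    dr = pos[None, :, :] - pos[:, None, :]    # dr[i, j] = x_j - x_i
    r2 = np.sum(dr**2, axis=-1) + eps**2
    np.fill_diagonal(r2, np.inf)              # remove the self-interaction
    return np.einsum('j,ijk,ij->ik', mass, dr, r2**-1.5)
```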
Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2011-08-01
In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds and running performance has been demonstrated for distances from 100 m to the marathon, but not in ultramarathon. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r^2 = 0.46) by the following equation: (performance in a 24-hour run, km) = 234.7 + 0.481 × (longest training session before the 24-hour run, km) - 0.594 × (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
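The quoted regression can be applied directly; the snippet below transcribes it, with only the function name being an assumption.

```python
# Direct transcription of the prediction equation quoted above (r^2 = 0.46).
def predicted_24h_distance_km(longest_training_run_km, marathon_pb_minutes):
    return 234.7 + 0.481 * longest_training_run_km - 0.594 * marathon_pb_minutes

# Example with the values suggested in the abstract (~60 km long run,
# ~3 h 20 min = 200 min marathon PB):
print(predicted_24h_distance_km(60, 200))   # ~144.8 km, close to the observed 146.1 km mean
```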
Saxena, Amol; Granot, Allison
2011-01-01
Achilles surgical patients were evaluated using an "anti-gravity" Alter-G (AG) treadmill that allows for reduction of weightbearing pressure on the lower extremity. We studied our hypothesis, which was based on our prior clinical findings, that being able to run on the AG treadmill at 85% of body weight is sufficient to clear patients to run with full body weight outside. Patients undergoing Achilles tendon rupture or insertional repair surgery were prospectively studied. They were compared with a control group that had similar surgeries and a similar rehabilitation program during the same time period: the variable was not using the AG treadmill. The criteria for the study group to be allowed to run outside was being able to run for at least 10 minutes on the AG at 85% of body weight. Each group had 8 patients who underwent surgery for 2 complete tendon ruptures and 6 insertional repairs. There was no significant difference between the AG and control group as to age and postoperative follow-up. AG patients began their initial run on the treadmill at 70% of their body weight at 13.9 ± 3.4 weeks, 85% at 17.6 ± 3.9 weeks, and outside running at 18.1 ± 3.9 weeks. The control group's return to running outside time was 20.4 ± 4.1 weeks. This was not significantly different (p = .27). We confirmed our hypothesis that being able to run at 85% of body weight after Achilles surgery was sufficient to clear patients to run outside. Copyright © 2011 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Factorization in large-scale many-body calculations
Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.
2013-08-07
One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
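The additive-quantum-number factorization idea can be illustrated with a toy example: group the basis states of two subsystems by an additive quantum number (here a Jz projection M) and pair only those groups whose values sum to the target, so the combined basis is generated on the fly rather than stored as one flat list. The two-sector split, the toy M values and the names are illustrative assumptions; this is not the BIGSTICK factorization.

```python
# Factorize a product basis by an additive quantum number M.
from collections import defaultdict
from itertools import product

def group_by_m(states):
    groups = defaultdict(list)
    for label, m in states:              # each state is (label, M)
        groups[m].append(label)
    return groups

proton_states = [("p0", -1), ("p1", 0), ("p2", 1)]
neutron_states = [("n0", -1), ("n1", 0), ("n2", 1)]
M_target = 0

p_groups, n_groups = group_by_m(proton_states), group_by_m(neutron_states)

def combined_basis():
    """Yield product states with total M = M_target without storing them all."""
    for m_p, p_list in p_groups.items():
        n_list = n_groups.get(M_target - m_p, [])
        yield from product(p_list, n_list)

print(list(combined_basis()))   # [('p0', 'n2'), ('p1', 'n1'), ('p2', 'n0')]
```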
Medical Sequencing at the extremes of Human Body Mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahituv, Nadav; Kavaslar, Nihan; Schackwitz, Wendy
2006-09-01
Body weight is a quantitative trait with significant heritability in humans. To identify potential genetic contributors to this phenotype, we resequenced the coding exons and splice junctions of 58 genes in 379 obese and 378 lean individuals. Our 96 Mb survey included 21 genes associated with monogenic forms of obesity in humans or mice, as well as 37 genes that function in body weight-related pathways. We found that the monogenic obesity-associated gene group was enriched for rare nonsynonymous variants unique to the obese (n=46) versus lean (n=26) populations. Computational analysis further predicted a significantly greater fraction of deleterious variants within the obese cohort. Consistent with the complex inheritance of body weight, we did not observe obvious familial segregation in the majority of the 28 available kindreds. Taken together, these data suggest that multiple rare alleles with variable penetrance contribute to obesity in the population and provide a deep medical sequencing based approach to detect them.
Wone, Bernard W M; Yim, Won C; Schutz, Heidi; Meek, Thomas H; Garland, Theodore
2018-04-04
Mitochondrial haplotypes have been associated with human and rodent phenotypes, including nonshivering thermogenesis capacity, learning capability, and disease risk. Although the mammalian mitochondrial D-loop is highly polymorphic, D-loops in laboratory mice are identical, and variation occurs elsewhere mainly between nucleotides 9820 and 9830. Part of this region codes for the tRNA Arg gene and is associated with mitochondrial densities and number of mtDNA copies. We hypothesized that the capacity for high levels of voluntary wheel-running behavior would be associated with mitochondrial haplotype. Here, we analyzed the mtDNA polymorphic region in mice from each of four replicate lines selectively bred for 54 generations for high voluntary wheel running (HR) and from four control lines (Control) randomly bred for 54 generations. Sequencing the polymorphic region revealed a variable number of adenine repeats. Single nucleotide polymorphisms (SNPs) varied from 2 to 3 adenine insertions, resulting in three haplotypes. We found significant genetic differentiations between the HR and Control groups (F st = 0.779, p ≤ 0.0001), as well as among the replicate lines of mice within groups (F sc = 0.757, p ≤ 0.0001). Haplotypes, however, were not strongly associated with voluntary wheel running (revolutions run per day), nor with either body mass or litter size. This system provides a useful experimental model to dissect the physiological processes linking mitochondrial, genomic SNPs, epigenetics, or nuclear-mitochondrial cross-talk to exercise activity. Copyright © 2018. Published by Elsevier B.V.
Fredriksen, Per Morten; Mamen, Asgeir; Gammelsrud, Heidi; Lindberg, Morten; Hjelle, Ole Petter
2018-05-01
The purpose of this study was to examine factors affecting running performance in children. A cross-sectional study exploring the relationships between height, weight, waist circumference, muscle mass, body fat percentage, relevant biomarkers, and the Andersen intermittent running test in 2272 children aged 6 to 12 years. Parental education level was used as a non-physiological explanatory variable. Mean values (SD) and percentiles are presented as reference values. Height (β = 6.4, p < .0001), high values of haemoglobin (β = 18, p = .013) and low percentage of body fat (β = -7.5, p < .0001) showed an association with results from the running test. In addition, high parental education level showed a positive association with the running test. Boys display better running performance than girls at all ages, except 7 years old, probably because of additional muscle mass and less fatty tissue. Height and increased level of haemoglobin positively affected running performance. Lower body fat percentage and high parental education level correlated with better running performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sienicki, J.J.
A fast-running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design as well as fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.
mocca code for star cluster simulations - VI. Bimodal spatial distribution of blue stragglers
NASA Astrophysics Data System (ADS)
Hypki, Arkadiusz; Giersz, Mirek
2017-11-01
The paper presents an analysis of the formation mechanism and properties of the spatial distributions of blue stragglers in evolving globular clusters, based on numerical simulations performed with the mocca code. First, we present N-body and mocca simulations that attempt to reproduce the simulations of Ferraro et al. (2012), and we show the agreement between the N-body and mocca codes. Finally, we discuss the formation process of the bimodal distribution. We report that we could not reproduce the simulations of Ferraro et al. (2012). Moreover, we show that the so-called bimodal spatial distribution of blue stragglers is a very transient feature: it is formed in one snapshot in time and can easily vanish in the next. We also show that the radius of avoidance proposed by Ferraro et al. (2012) falls out of sync with the apparent minimum of the bimodal distribution after about two half-mass relaxation times, for reasons that remain undetermined. This finding poses a real challenge for the dynamical clock, which uses this radius to determine the dynamical age of globular clusters. Additionally, the paper discusses a few important problems concerning the apparent visibility of the bimodal distributions that have to be taken into account when studying the spatial distributions of blue stragglers.
BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinberg, Elad; Yalinewich, Almog; Sari, Re'em
2015-01-01
One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communications between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need of any global redistributions of data. As a show case, we implement our method in RICH, a two-dimensional moving mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.
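As a rough, hedged illustration of the general idea (a plain Lloyd-style relaxation, simplified from the load-aware scheme the abstract describes): work cells are assigned to the nearest of one generator point per CPU, i.e., to that generator's Voronoi cell, and each generator is then moved toward the centroid of its assigned cells so that crowded regions shed work over a few iterations. Generator placement and the update rule below are assumptions for illustration, not the RICH implementation.

```python
import numpy as np

def voronoi_balance(cell_pos, n_cpu, n_iter=20, seed=0):
    """Assign 2-D cells to CPUs by nearest generator point (membership in that
    generator's Voronoi cell), then relax generators toward the centroid of
    their assigned cells to even out the per-CPU cell counts."""
    rng = np.random.default_rng(seed)
    gen = cell_pos[rng.choice(len(cell_pos), n_cpu, replace=False)].copy()
    for _ in range(n_iter):
        d2 = ((cell_pos[:, None, :] - gen[None, :, :]) ** 2).sum(axis=-1)
        owner = d2.argmin(axis=1)                      # nearest generator = owning CPU
        for k in range(n_cpu):
            mine = cell_pos[owner == k]
            if len(mine):
                gen[k] = mine.mean(axis=0)             # Lloyd-style relaxation step
    return owner, gen

cells = np.random.default_rng(1).random((10000, 2))     # toy 2-D cell positions
owner, _ = voronoi_balance(cells, n_cpu=8)
print(np.bincount(owner))                                # cells per CPU after relaxation
```

With roughly uniform cells this drives the per-CPU counts toward equality using only local generator updates, which is in the spirit of the abstract's claim of avoiding global redistributions of data.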
Lee, Min Chul; Inoue, Koshiro; Okamoto, Masahiro; Liu, Yu Fan; Matsui, Takashi; Yook, Jang Soo; Soya, Hideaki
2013-03-14
Recently, we reported that voluntary resistance wheel running with a resistance of 30% of body weight (RWR), which produces shorter distances but higher work levels, enhances spatial memory associated with hippocampal brain-derived neurotrophic factor (BDNF) signaling compared to wheel running without a load (WR) [17]. We thus hypothesized that RWR promotes adult hippocampal neurogenesis (AHN) as a neuronal substrate underlying this memory improvement. Here we used 10-week-old male Wistar rats divided randomly into sedentary (Sed), WR, and RWR groups. All rats were injected intraperitoneally with the thymidine analogue 5-Bromo-2'-deoxyuridine (BrdU) for 3 consecutive days before wheel running. We found that even when the average running distance decreased by about half, the average work levels significantly increased in the RWR group, which caused muscular adaptation (oxidative capacity) for fast-twitch plantaris muscle without causing any negative stress effects. Additionally, immunohistochemistry revealed that the total BrdU-positive cells and newborn mature cells (BrdU/NeuN double-positive) in the dentate gyrus increased in both the WR and RWR groups. These results provide new evidence that RWR has beneficial effects on AHN comparable to WR, even with short running distances. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Chaouachi, Anis; Othman, Aymen Ben; Hammami, Raouf; Drinkwater, Eric J; Behm, David G
2014-02-01
Because balance is not fully developed in children and studies have shown functional improvements with balance-only training, a combination of plyometric and balance activities might enhance static balance, dynamic balance, and power. The objective of this study was to compare the effectiveness of plyometric only (PLYO) with balance and plyometric (COMBINED) training on balance and power measures in children. Before and after an 8-week training period, testing assessed lower-body strength (1 repetition maximum leg press), power (horizontal and vertical jumps, triple hop for distance, reactive strength, and leg stiffness), running speed (10-m and 30-m sprint), static and dynamic balance (Standing Stork Test and Star Excursion Balance Test), and agility (shuttle run). Subjects were randomly divided into 2 training groups (PLYO [n = 14] and COMBINED [n = 14]) and a control group (n = 12). Results based on magnitude-based inferences and precision of estimation indicated that the COMBINED training group was likely to be superior to the PLYO group in leg stiffness (d = 0.69, 91% likely), 10-m sprint (d = 0.57, 84% likely), and shuttle run (d = 0.52, 80% likely). The difference between the groups was unclear in 8 of the 11 dependent variables. COMBINED training enhanced activities such as 10-m sprints and shuttle runs to a greater degree. COMBINED training could be an important consideration for reducing the high-velocity impacts of PLYO training. This reduction in stretch-shortening cycle stress on the neuromuscular system with the replacement of balance and landing exercises might help to alleviate the overtraining effects of excessive repetitive high-load activities.
Zillmann, Teresa; Knechtle, Beat; Rüst, Christoph Alexander; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2013-06-30
Participation in endurance running such as half-marathon (21-km) and marathon (42-km) has increased over the last decades. We compared 147 recreational male half-marathoners and 126 recreational male marathoners to investigate similarities or differences in their anthropometric and training characteristics. The half-marathoners were heavier (P < 0.05), had longer legs (P < 0.001), thicker upper arms (P < 0.05), a thicker thigh (P < 0.01), a higher sum of skinfold thicknesses (P < 0.01), a higher body fat percentage (P < 0.05) and a higher skeletal muscle mass (P < 0.05) than the marathoners. They had fewer years of experience (P < 0.05), completed fewer weekly training kilometers (P < 0.001), and fewer weekly running hours (P < 0.01) compared to the marathoners. For half-marathoners, body mass index (P = 0.011), percent body fat (P = 0.036) and speed in running during training (P < 0.0001) were related to race time (r2 = 0.47). For marathoners, percent body fat (P = 0.001) and speed in running during training (P < 0.0001) were associated with race time (r2 = 0.47). When body mass index was excluded for the half-marathoners in the multi-variate analysis, r2 decreased to 0.45; therefore, body mass index explained only 2% of the variance of half-marathon performance. Percent body fat was significantly and negatively related to running speed during training in both groups. To summarize, half-marathoners showed differences in both anthropometry and training characteristics compared to marathoners that could be related to their lower training volume, most probably due to the shorter race distance they intended to compete in. Both groups of athletes seemed to profit from low body fat and a high running speed during training for fast race times.
Constraints on the pre-impact orbits of Solar system giant impactors
NASA Astrophysics Data System (ADS)
Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.
2018-03-01
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
Vikmoen, Olav; Rønnestad, Bent R; Ellefsen, Stian; Raastad, Truls
2017-03-01
The purpose of this study was to investigate the effects of adding heavy strength training to female duathletes' normal endurance training on both cycling and running performance. Nineteen well-trained female duathletes (VO2max cycling: 54 ± 3 ml·kg⁻¹·min⁻¹; VO2max running: 53 ± 3 ml·kg⁻¹·min⁻¹) were randomly assigned to either normal endurance training (E, n = 8) or normal endurance training combined with strength training (E+S, n = 11). The strength training consisted of four lower-body exercises [3 × 4-10 repetition maximum (RM)] twice a week for 11 weeks. Running and cycling performance were assessed using 5-min all-out tests, performed immediately after prolonged periods of submaximal work (3 h cycling or 1.5 h running). E+S increased 1RM in half squat (45 ± 22%) and lean mass in the legs (3.1 ± 4.0%) more than E. Performance during the 5-min all-out test increased in both cycling (7.0 ± 4.5%) and running (4.7 ± 6.0%) in E+S, whereas no changes occurred in E. The changes in running performance were different between groups. E+S reduced oxygen consumption and heart rate during the final 2 h of prolonged cycling, whereas no changes occurred in E. No changes occurred during the prolonged running in any group. Adding strength training to normal endurance training in well-trained female duathletes improved both running and cycling performance when tested immediately after prolonged submaximal work. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
2011-08-01
The approved Statement of Work proposed the project timeline shown in Table 1. Prosthetic running feet tested for this project (Figure 1) included the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah® (Ossur), and the Nitro Running Foot (Freedom …).
ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations
NASA Astrophysics Data System (ADS)
Freitag, Marc Dewi
2013-02-01
ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long-term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 1970s and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, etc.). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations, spanning a Hubble time, in a few CPU days on standard personal computers and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
Iowa State University – Final Report for SciDAC3/NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vary, James P
The Iowa State University (ISU) contributions to the NUCLEI project are focused on developing, implementing and running an efficient and scalable configuration interaction code (Many-Fermion Dynamics – nuclear or MFDn) for leadership class supercomputers addressing forefront research problems in low-energy nuclear physics. We investigate nuclear structure and reactions with realistic nucleon-nucleon (NN) and three-nucleon (3N) interactions. We select a few highlights from our work that has produced a total of more than 82 refereed publications and more than 109 invited talks under SciDAC3/NUCLEI.
Processes and Knowledge in Designing Instruction
1990-10-05
direction I think I’d go through is to explain the power source of each one. Say, "Okay, the first thing you need to do is you have to have a power source...to run the machine. You can use any one of the three power sources, be they the impulse purifier, the tablograph, or the vegetor.... [N1B, Episode 3...Determine Content. Words printed in boldface were coded as Determine Sequence.) So, we’d be starting off with a power source to each one of the units
Drag Prediction for the DLR-F6 Wing/Body and DPW Wing using CFL3D and OVERFLOW Overset Mesh
NASA Technical Reports Server (NTRS)
Sclanfani, Anthony J.; Vassberg, John C.; Harrison, Neal A.; DeHaan, Mark A.; Rumsey, Christopher L.; Rivers, S. Melissa; Morrison, Joseph H.
2007-01-01
A series of overset grids was generated in response to the 3rd AIAA CFD Drag Prediction Workshop (DPW-III) which preceded the 25th Applied Aerodynamics Conference in June 2006. DPW-III focused on accurate drag prediction for wing/body and wing-alone configurations. The grid series built for each configuration consists of a coarse, medium, fine, and extra-fine mesh. The medium mesh is first constructed using the current state of best practices for overset grid generation. The medium mesh is then coarsened and enhanced by applying a factor of 1.5 to each (I,J,K) dimension. The resulting set of parametrically equivalent grids increases in size by a factor of roughly 3.5 from one level to the next denser level. CFD simulations were performed on the overset grids using two different RANS flow solvers: CFL3D and OVERFLOW. The results were post-processed using Richardson extrapolation to approximate grid-converged values of lift, drag, pitching moment, and angle-of-attack at the design condition. This technique appears to work well if the solution does not contain large regions of separated flow (similar to that seen in the DLR-F6 results) and appropriate grid densities are selected. The extra-fine grid data helped to establish asymptotic grid convergence for both the OVERFLOW FX2B wing/body results and the OVERFLOW DPW-W1/W2 wing-alone results. More CFL3D data is needed to establish grid convergence trends. The medium grid was utilized beyond the grid convergence study by running each configuration at several angles-of-attack so drag polars and lift/pitching moment curves could be evaluated. The alpha sweep results are used to compare data across configurations as well as across flow solvers. With the exception of the wing/body drag polar, the two codes compare well qualitatively, showing consistent incremental trends and similar wing pressure comparisons.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code of identical size, and may therefore open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, add an additional security level to the JTC. We present digital results that confirm our approach.
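As a hedged sketch of the first preprocessing step only (lossless run-length encoding of a binary image), flattened pixels are stored as the starting value plus the lengths of consecutive runs; the chaos-based scrambling and the QR-code translation described in the abstract are not reproduced here.

```python
import numpy as np

def rle_encode(binary_image):
    """Run-length encode a flattened binary image: store the first pixel value
    followed by the lengths of consecutive runs."""
    flat = np.asarray(binary_image, dtype=np.uint8).ravel()
    change = np.flatnonzero(np.diff(flat)) + 1            # indices where the value flips
    runs = np.diff(np.concatenate(([0], change, [flat.size])))
    return int(flat[0]), runs.tolist()

def rle_decode(first_value, runs):
    """Invert rle_encode (lossless)."""
    out, value = [], first_value
    for r in runs:
        out.extend([value] * r)
        value ^= 1
    return np.array(out, dtype=np.uint8)

img = (np.random.default_rng(2).random((32, 32)) > 0.7).astype(np.uint8)
v0, runs = rle_encode(img)
assert np.array_equal(rle_decode(v0, runs), img.ravel())  # lossless round trip
```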
Additional extensions to the NASCAP computer code, volume 3
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Cooke, D. L.
1981-01-01
The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.
NASA Astrophysics Data System (ADS)
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.2 is confirmed.
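For orientation only: the RLL(1,∞) constraint (d = 1, k = ∞) requires at least one 0 between any two 1s in the recorded pixel stream. The sketch below checks that constraint and shows a naive rate-1/2 mapping that trivially satisfies it; it is an assumption-laden illustration, not the paper's trellis modulation or turbo decoding.

```python
def satisfies_rll_1_inf(bits):
    """True if no two 1s are adjacent, i.e. at least one 0 separates every pair of 1s."""
    return all(not (a and b) for a, b in zip(bits, bits[1:]))

def naive_rll_encode(data_bits):
    """Naive rate-1/2 mapping: append a 0 after every data bit.
    The result always satisfies the d = 1 constraint (illustration only)."""
    out = []
    for b in data_bits:
        out.extend([b, 0])
    return out

data = [1, 1, 0, 1, 0, 1, 1]
coded = naive_rll_encode(data)
assert satisfies_rll_1_inf(coded)
print(coded)  # [1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0]
```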
Predictors of fielding performance in professional baseball players.
Mangine, Gerald T; Hoffman, Jay R; Vazquez, Jose; Pichardo, Napoleon; Fragala, Maren S; Stout, Jeffrey R
2013-09-01
The ultimate zone-rating extrapolation (UZR/150) rates fielding performance by runs saved or cost within a zone of responsibility in comparison with the league average (150 games) for a position. Spring-training anthropometric and performance measures have been previously related to hitting performance; however, their relationships with fielding performance measures are unknown. The aim was to examine the relationship between anthropometric and performance measurements and fielding performance in professional baseball players. Body mass, lean body mass (LBM), grip strength, 10-yd sprint, proagility, and vertical-jump mean power (VJMP) and peak power (VJPP) were collected during spring training over the course of 5 seasons (2007-2011) for professional corner infielders (CI; n = 17, fielding opportunities = 420.7 ± 307.1), middle infielders (MI; n = 14, fielding opportunities = 497.3 ± 259.1), and outfielders (OF; n = 16, fielding opportunities = 227.9 ± 70.9). The relationships between these data and regular-season (100-opportunity minimum) fielding statistics were examined using Pearson correlation coefficients, while stepwise regression identified the single best predictor of UZR/150. Significant correlations (P < .05) were observed between UZR/150 and body mass (r = .364), LBM (r = .396), VJPP (r = .397), and VJMP (r = .405). Of these variables, stepwise regression indicated VJMP (R = .405, SEE = 14.441, P = .005) as the single best predictor for all players, although the addition of proagility performance strengthened (R = .496, SEE = 13.865, P = .002) predictive ability by 8.3%. The best predictor for UZR/150 was body mass for CI (R = .519, SEE = 15.364, P = .033) and MI (R = .672, SEE = 12.331, P = .009), while proagility time was the best predictor for OF (R = .514, SEE = 8.850, P = .042). Spring-training measurements of VJMP and proagility time may predict the defensive run value of a player over the course of a professional baseball season.
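As a hedged sketch of the kind of analysis described (Pearson correlations followed by a single-predictor forward-stepwise step), on synthetic stand-in data rather than the players' measurements:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 47
# Synthetic stand-ins for spring-training measures (illustrative values only).
predictors = {
    "body_mass": rng.normal(95.0, 10.0, n),
    "VJMP": rng.normal(5500.0, 600.0, n),
    "proagility": rng.normal(4.4, 0.2, n),
}
uzr150 = 0.02 * predictors["VJMP"] - 20.0 * predictors["proagility"] + rng.normal(0.0, 10.0, n)

# Pearson correlation of each measure with the fielding metric.
for name, x in predictors.items():
    r, p = pearsonr(x, uzr150)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")

# First forward-stepwise step: the single predictor with the largest |r|.
best = max(predictors, key=lambda k: abs(pearsonr(predictors[k], uzr150)[0]))
print("single best predictor:", best)
```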
Larsen, Malte Nejst; Nielsen, Claus Malta; Ørntoft, Christina; Randers, Morten Bredsgaard; Helge, Eva Wulff; Madsen, Mads; Manniche, Vibeke; Hansen, Lone; Hansen, Peter Riis; Bangsbo, Jens; Krustrup, Peter
2017-01-01
We investigated the exercise intensity and fitness effects of frequent school-based low-volume high-intensity training for 10 months in 8–10-year-old children. 239 Danish 3rd-grade school children from four schools were cluster-randomised into a control group (CON, n = 116) or two training groups performing either 5 × 12 min/wk small-sided football plus other ball games (SSG, n = 62) or interval running (IR, n = 61). Whole-body DXA scans, flamingo balance, standing long-jump, 20 m sprint, and Yo-Yo IR1 children's tests (YYIR1C) were performed before and after the intervention. Mean running velocity was higher (p < 0.05) in SSG than in IR (0.88 ± 0.14 versus 0.63 ± 0.20 m/s), while more time (p < 0.05) was spent in the highest player load zone (>2; 5.6 ± 3.4 versus 3.7 ± 3.4%) and highest HR zone (>90% HRmax; 12.4 ± 8.9 versus 8.4 ± 8.0%) in IR compared to SSG. After 10 months, no significant between-group differences were observed for YYIR1C performance and HR after 2 min of YYIR1C (HRsubmax), but median-split analyses showed that HRsubmax was reduced (p < 0.05) in both training groups compared to CON for those with the lowest aerobic fitness (SSG versus CON: 3.2% HRmax [95% CI: 0.8–5.5]; IR versus CON: 2.6% HRmax [95% CI: 1.1–5.2]). After 10 months, IR had improved (p < 0.05) 20 m sprint performance (IR versus CON: 154 ms [95% CI: 61–241]). No between-group differences (p > 0.05) were observed for whole-body or leg aBMD, lean mass, postural balance, or jump length. In conclusion, frequent low-volume ball games and interval running can be conducted over a full school year with high intensity rate but have limited positive fitness effects in 8–10-year-old children.
NASA Astrophysics Data System (ADS)
Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon
2015-08-01
For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
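As a hedged illustration of the single-planet Keplerian radial-velocity model that tools like these fit (a generic textbook form with semi-amplitude K, period P, eccentricity e, argument of periastron ω, time of periastron Tp and offset γ; not the RVLIN, BOOTTRAN or RUN DMC implementations):

```python
import numpy as np

def kepler_solve(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e sin(E) for the eccentric anomaly by Newton iteration."""
    E = M.copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, K, P, e, omega, Tp, gamma):
    """Stellar radial velocity for one planet: v(t) = K [cos(nu + omega) + e cos(omega)] + gamma."""
    M = np.mod(2.0 * np.pi * (t - Tp) / P, 2.0 * np.pi)
    E = kepler_solve(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma

t = np.linspace(0.0, 30.0, 200)                       # days (arbitrary illustrative values)
print(rv_model(t, K=55.0, P=4.2, e=0.1, omega=0.5, Tp=1.0, gamma=3.0)[:3])
```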
Homa, Lori D; Burger, Laura L; Cuttitta, Ashley J; Michele, Daniel E; Moenter, Suzanne M
2015-12-01
Prenatal androgen (PNA) exposure in mice produces a phenotype resembling lean polycystic ovary syndrome. We studied effects of voluntary exercise on metabolic and reproductive parameters in PNA vs vehicle (VEH)-treated mice. Mice (8 wk of age) were housed individually and estrous cycles monitored. At 10 weeks of age, mice were divided into groups (PNA, PNA-run, VEH, VEH-run, n = 8-9/group); those in the running groups received wheels allowing voluntary running. Unexpectedly, PNA mice ran less distance than VEH mice; ovariectomy eliminated this difference. In ovary-intact mice, there was no difference in glucose tolerance, lower limb muscle fiber types, weight, or body composition among groups after 16 weeks of running, although some mitochondrial proteins were mildly up-regulated by exercise in PNA mice. Before running, estrous cycles in PNA mice were disrupted with most days in diestrus. There was no change in cycles during weeks 1-6 of running (10-15 wk of age). In contrast, from weeks 11 to 16 of running, cycles in PNA mice improved with more days in proestrus and estrus and fewer in diestrus. PNA programs reduced voluntary exercise, perhaps mediated in part by ovarian secretions. Exercise without weight loss improved estrous cycles, which if translated could be important for fertility in and counseling of lean women with polycystic ovary syndrome.
NASA Astrophysics Data System (ADS)
Kral, Q.; Thébault, P.; Charnoz, S.
2013-10-01
Context. In most current debris disc models, the dynamical and the collisional evolutions are studied separately with N-body and statistical codes, respectively, because of stringent computational constraints. In particular, incorporating collisional effects (especially destructive collisions) into an N-body scheme has proven a very arduous task because of the exponential increase of particles it would imply. Aims: We present here LIDT-DD, the first code able to mix both approaches in a fully self-consistent way. Our aim is for it to be generic enough to be applied to any astrophysical case where we expect dynamics and collisions to be deeply interlocked with one another: planets in discs, violent massive breakups, destabilized planetesimal belts, bright exozodiacal discs, etc. Methods: The code takes its basic architecture from the LIDT3D algorithm for protoplanetary discs, but has been strongly modified and updated to handle the very constraining specificities of debris disc physics: high-velocity fragmenting collisions, radiation-pressure affected orbits, the absence of gas (which means initial conditions are never relaxed), etc. It has a 3D Lagrangian-Eulerian structure, where grains of a given size at a given location in a disc are grouped into super-particles or tracers whose orbits are evolved with an N-body code and whose mutual collisions are individually tracked and treated using a particle-in-a-box prescription designed to handle fragmenting impacts. To cope with the wide range of possible dynamics for same-sized particles at any given location in the disc, and in order not to lose important dynamical information, tracers are sorted and regrouped into dynamical families depending on their orbits. A complex reassignment routine, which searches for redundant tracers in each family and reassigns them where they are needed, prevents the number of tracers from diverging. Results: The LIDT-DD code has been successfully tested on simplified cases for which robust results have been obtained in past studies: we retrieve the classical features of particle size distributions in unperturbed discs, the outer radial density profiles scaling as ~r^-1.5 outside narrow collisionally active rings, and the depletion of small grains in dynamically cold discs. The potential of the new code is illustrated with the test case of the violent breakup of a massive planetesimal within a debris disc. Preliminary results show that we are able for the first time to quantify the timescale over which the signature of such massive break-ups can be detected. In addition to studying such violent transient events, the main potential future applications of the code are planet and disc interactions and, more generally, any configuration where dynamics and collisions are expected to be intricately connected.
Supersonic Retropropulsion Test 1853 in NASA LaRC Unitary Plan Wind Tunnel Test Section 2
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Rhode, Matthew N.
2014-01-01
A supersonic retropropulsion experiment was conducted in the Langley Research Center Unitary Plan Wind Tunnel Test Section 2 at Mach numbers of 2.4, 3.5, and 4.6. Intended as a code validation effort, this study used pretest computations to size and refine the model such that tunnel blockage and internal flow separations were minimized. A 5-in diameter 70 degree sphere-cone forebody, which can accommodate up to four 4:1 area ratio nozzles, followed by a 9.55 inches long cylindrical aft body was selected for this test after computational maturation. The primary measurements for this experiment were high spatial-density surface pressures. In addition, high speed schlieren video and internal pressures and temperatures were acquired. The test included parametric variations in the number of nozzles utilized, thrust coefficients (roughly 0 to 4), and angles of attack (-8 to 20 degrees). The run matrix was developed to also allow quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and systematic errors due to flowfield or model misalignments. To accommodate the uncertainty assessment, many runs and replicates were conducted with the model at various locations within the tunnel and with model roll angles of 0, 60, 120, and 180 degrees. This test report provides operational details of the experiment, contains a review of trends, and provides all schlieren and pressure results within appendices.
Particle-gas dynamics in the protoplanetary nebula
NASA Technical Reports Server (NTRS)
Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.
1991-01-01
In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.
Michael, Claudia; Rizzi, Andreas M
2015-02-27
Glycan reductive isotope labeling (GRIL) using (12)C6-/(13)C6-aniline as labeling reagent is reported with the aim of quantitative N-glycan fingerprinting. Porous graphitized carbon (PGC) as stationary phase in capillary scale HPLC coupled to electrospray mass spectrometry with time of flight analyzer was applied for the determination of labeled N-glycans released from glycoproteins. The main benefit of using stable isotope-coding in the context of comparative glycomics lies in the improved accuracy and precision of the quantitative analysis in combined samples and in the potential of correcting for structure-dependent incomplete enzymatic release of oligosaccharides when comparing identical target proteins. The method was validated with respect to mobile phase parameters, reproducibility, accuracy, linearity and limit of detection/quantification (LOD/LOQ) using test glycoproteins. It is shown that the developed method is capable of determining relative amounts of N-glycans (including isomers) comparing two samples in one single HPLC-MS run. The analytical potential and usefulness of GRIL in combination with PGC-ESI-TOF-MS is demonstrated comparing glycosylation in human monoclonal antibodies produced in Chinese hamster ovary cells (CHO) and hybridoma cell lines. Copyright © 2015 Elsevier B.V. All rights reserved.
Postcollapse Evolution of Globular Clusters
NASA Astrophysics Data System (ADS)
Makino, Junichiro
1996-11-01
A number of globular clusters appear to have undergone core collapse, in the sense that their predicted collapse times are much shorter than their current ages. Simulations with gas models and the Fokker-Planck approximation have shown that the central density of a globular cluster after the collapse undergoes nonlinear oscillation with a large amplitude (gravothermal oscillation). However, the question whether such an oscillation actually takes place in real N-body systems has remained unsolved because an N-body simulation with a sufficiently high resolution would have required computing resources of the order of several GFLOPS-yr. In the present paper, we report the results of such a simulation performed on a dedicated special-purpose computer, GRAPE-4. We have simulated the evolution of isolated point-mass systems with up to 32,768 particles. The largest number of particles reported previously is 10,000. We confirm that gravothermal oscillation takes place in an N-body system. The expansion phase shows all the signatures that are considered to be evidence of the gravothermal nature of the oscillation. At the maximum expansion, the core radius is ˜1% of the half-mass radius for the run with 32,768 particles. The maximum core size, rc, depends on N as
Darrall-Jones, Joshua D; Jones, Ben; Till, Kevin
2016-05-01
The purpose of this study was to evaluate the anthropometric, sprint, and high-intensity running profiles of English academy rugby union players by playing position, and to investigate the relationships between anthropometric, sprint, and high-intensity running characteristics. Data were collected from 67 academy players after the off-season period and consisted of anthropometric measures (height, body mass, sum of 8 skinfolds [∑SF]), a 40-m linear sprint (5-, 10-, 20-, and 40-m splits), the Yo-Yo intermittent recovery test level 1 (Yo-Yo IRTL-1), and the 30-15 intermittent fitness test (30-15 IFT). Forwards displayed greater stature, body mass, ∑SF, sprint times, and sprint momentum, with lower high-intensity running ability and sprint velocities than backs. Comparisons between age categories demonstrated body mass and sprint momentum to have the largest differences at consecutive age categories for forwards and backs, whereas 20-40-m sprint velocity was discriminative for forwards between under 16s, 18s, and 21s. Relationships between anthropometric characteristics, sprint velocity, momentum, and high-intensity running ability demonstrated body mass to negatively affect sprint velocity (10 m; r = -0.34 to -0.46) and positively affect sprint momentum (e.g., 5 m; r = 0.85-0.93), with large to very large negative relationships with the Yo-Yo IRTL-1 (r = -0.65 to -0.74) and 30-15 IFT (r = -0.59 to -0.79). These findings suggest that there are distinct anthropometric, sprint, and high-intensity running ability differences between and within positions in junior rugby union players. The development of sprint and high-intensity running ability may be impacted by continued increases in body mass, as there seems to be a trade-off between momentum, velocity, and the ability to complete high-intensity running.
Nyx: Adaptive mesh, massively-parallel, cosmological simulation code
NASA Astrophysics Data System (ADS)
Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun
2017-12-01
The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
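As a hedged, one-dimensional illustration of Cloud-in-Cell deposition (the actual Nyx scheme works on 3-D adaptive grids), each particle's mass is split linearly between its two nearest cells of a periodic grid:

```python
import numpy as np

def cic_deposit_1d(x, mass, n_cells, box_size):
    """Deposit particle masses onto a periodic 1-D grid with Cloud-in-Cell weights."""
    dx = box_size / n_cells
    xi = x / dx - 0.5                  # position in cell units; cell centres sit at (i + 0.5) dx
    i_left = np.floor(xi).astype(int)
    w_right = xi - i_left              # linear weight given to the right-hand cell
    grid = np.zeros(n_cells)
    np.add.at(grid, i_left % n_cells, mass * (1.0 - w_right))
    np.add.at(grid, (i_left + 1) % n_cells, mass * w_right)
    return grid / dx                   # return a density

rng = np.random.default_rng(4)
x = rng.random(100000) * 100.0
rho = cic_deposit_1d(x, np.ones_like(x), n_cells=64, box_size=100.0)
print(rho.mean())                      # ~1000 particles per unit length on average
```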
Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
ERIC Educational Resources Information Center
Jefferson, Galeano Martínez; Ciro, Parra Moreno; Méndez Sánchez, Maria Andrea
2017-01-01
The Bogotá River is one of the most contaminated bodies of water in Colombia and in the world. It originates in Guacheneque Páramo (Villapinzón, Cundinamarca) in the centre of the country and runs 336 kilometres before joining the Magdalena River. Along its course, the river receives the sewage of approximately 20.9% of Colombia's population. The…
NASA Astrophysics Data System (ADS)
Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.
2018-04-01
Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make a scientific inference, statistical analysis of large simulation datasets, e.g., finding halos and obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the datasets that are prohibitively large in modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ~10^4-10^5, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ~10^12 particles while using less than an hour of running time on a single GPU Nvidia GTX 1080.
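As a hedged sketch of the streaming idea only (approximating per-cell particle counts, and hence candidate halo centres, in fixed memory), a Count-Min sketch is one standard structure for this purpose; the grid, parameters, and candidate tracking below are illustrative assumptions, not the authors' GPU implementation.

```python
import numpy as np

class CountMin:
    """Count-Min sketch: approximate counts in fixed memory (illustrative parameters)."""
    def __init__(self, width=2048, depth=4, seed=0):
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.salts = np.random.default_rng(seed).integers(1, 2**31 - 1, size=depth)
        self.width = width

    def _cols(self, key):
        return [hash((int(salt), key)) % self.width for salt in self.salts]

    def add(self, key, count=1):
        for row, col in enumerate(self._cols(key)):
            self.table[row, col] += count

    def estimate(self, key):
        return min(self.table[row, col] for row, col in enumerate(self._cols(key)))

rng = np.random.default_rng(5)
cm = CountMin()
candidates = set()                     # a true streaming code would keep only a small heap of heavy hitters
for _ in range(200000):
    cell = tuple(int(c) for c in rng.integers(0, 32, size=3))   # particle mapped to a 32^3 grid cell
    cm.add(cell)
    candidates.add(cell)
densest = max(candidates, key=cm.estimate)
print(densest, cm.estimate(densest))   # approximate count of the most occupied cell
```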
Effects of velocity and weight support on ground reaction forces and metabolic power during running.
Grabowski, Alena M; Kram, Rodger
2008-08-01
The biomechanical and metabolic demands of human running are distinctly affected by velocity and body weight. As runners increase velocity, ground reaction forces (GRF) increase, which may increase the risk of an overuse injury, and more metabolic power is required to produce greater rates of muscular force generation. Running with weight support attenuates GRFs, but demands less metabolic power than normal weight running. We used a recently developed device (G-trainer) that uses positive air pressure around the lower body to support body weight during treadmill running. Our scientific goal was to quantify the separate and combined effects of running velocity and weight support on GRFs and metabolic power. After obtaining this basic data set, we identified velocity and weight support combinations that resulted in different peak GRFs, yet demanded the same metabolic power. Ideal combinations of velocity and weight could potentially reduce biomechanical risks by attenuating peak GRFs while maintaining aerobic and neuromuscular benefits. Indeed, we found many combinations that decreased peak vertical GRFs yet demanded the same metabolic power as running slower at normal weight. This approach of manipulating velocity and weight during running may prove effective as a training and/or rehabilitation strategy.
Ferrauti, Alexander; Bergermann, Matthias; Fernandez-Fernandez, Jaime
2010-10-01
The purpose of this study was to investigate the effects of a concurrent strength and endurance training program on running performance and running economy of middle-aged runners during their marathon preparation. Twenty-two (8 women and 14 men) recreational runners (mean ± SD: age 40.0 ± 11.7 years; body mass index 22.6 ± 2.1 kg·m⁻²) were separated into 2 groups (n = 11; combined endurance running and strength training program [ES]: 9 men, 2 women and endurance running [E]: 7 men, and 4 women). Both completed an 8-week intervention period that consisted of either endurance training (E: 276 ± 108 minute running per week) or a combined endurance and strength training program (ES: 240 ± 121-minute running plus 2 strength training sessions per week [120 minutes]). Strength training was focused on trunk (strength endurance program) and leg muscles (high-intensity program). Before and after the intervention, subjects completed an incremental treadmill run and maximal isometric strength tests. The initial values for VO2peak (ES: 52.0 ± 6.1 vs. E: 51.1 ± 7.5 ml·kg⁻¹·min⁻¹) and anaerobic threshold (ES: 3.5 ± 0.4 vs. E: 3.4 ± 0.5 m·s⁻¹) were identical in both groups. A significant time × intervention effect was found for maximal isometric force of knee extension (ES: from 4.6 ± 1.4 to 6.2 ± 1.0 N·kg⁻¹, p < 0.01), whereas no changes in body mass occurred. No significant differences between the groups and no significant interaction (time × intervention) were found for VO2 (absolute and relative to VO2peak) at defined marathon running velocities (2.4 and 2.8 m·s⁻¹) and submaximal blood lactate thresholds (2.0, 3.0, and 4.0 mmol·L⁻¹). Stride length and stride frequency also remained unchanged. The results suggest no benefits of an 8-week concurrent strength training for running economy and coordination of recreational marathon runners despite a clear improvement in leg strength, maybe because of an insufficient sample size or a short intervention period.
Voluntary Running Aids to Maintain High Body Temperature in Rats Bred for High Aerobic Capacity
Karvinen, Sira M.; Silvennoinen, Mika; Ma, Hongqiang; Törmäkangas, Timo; Rantalainen, Timo; Rinnankoski-Tuikka, Rita; Lensu, Sanna; Koch, Lauren G.; Britton, Steven L.; Kainulainen, Heikki
2016-01-01
The production of heat, i.e., thermogenesis, is a significant component of the metabolic rate, which in turn affects weight gain and health. Thermogenesis is linked to physical activity (PA) level. However, it is not known whether intrinsic exercise capacity, aging, and long-term voluntary running affect core body temperature. Here we use rat models selectively bred to differ in maximal treadmill endurance running capacity (Low capacity runners, LCR and High capacity Runners, HCR), that as adults are divergent for aerobic exercise capacity, aging, and metabolic disease risk to study the connection between PA and body temperature. Ten high capacity runner (HCR) and ten low capacity runner (LCR) female rats were studied between 9 and 21 months of age. Rectal body temperature of HCR and LCR rats was measured before and after 1-year voluntary running/control intervention to explore the effects of aging and PA. Also, we determined whether injected glucose and spontaneous activity affect the body temperature differently between LCR and HCR rats at 9 vs. 21 months of age. HCRs had on average 1.3°C higher body temperature than LCRs (p < 0.001). Aging decreased the body temperature level of HCRs to similar levels with LCRs. The opportunity to run voluntarily had a significant impact on the body temperature of HCRs (p < 0.001) allowing them to maintain body temperature at a similar level as when at younger age. Compared to LCRs, HCRs were spontaneously more active, had higher relative gastrocnemius muscle mass and higher UCP2, PGC-1α, cyt c, and OXPHOS levels in the skeletal muscle (p < 0.050). These results suggest that higher PA level together with greater relative muscle mass and higher mitochondrial content/function contribute to the accumulation of heat in the HCRs. Interestingly, neither aging nor voluntary training had a significant impact on core body temperature of LCRs. However, glucose injection resulted in a lowering of the body temperature of LCRs (p < 0.050), but not that of HCRs. In conclusion, rats born with high intrinsic capacity for aerobic exercise and better health have higher body temperature compared to rats born with low exercise capacity and disease risk. Voluntary running allowed HCRs to maintain high body temperature during aging, which suggests that high PA level was crucial in maintaining the high body temperature of HCRs. PMID:27504097
Clarke, Neil D; Thomas, James R; Kagka, Marion; Ramsbottom, Roger; Delextrat, Anne
2017-03-01
Clarke, ND, Thomas, JR, Kagka, M, Ramsbottom, R, and Delextrat, A. No dose-response effect of carbohydrate mouth rinse concentration on 5-km running performance in recreational athletes. J Strength Cond Res 31(3): 715-720, 2017-Oral carbohydrate rinsing has been demonstrated to provide beneficial effects on exercise performance of durations of up to 1 hour, albeit predominately in a laboratory setting. The aim of the present study was to investigate the effects of different concentrations of carbohydrate solution mouth rinse on 5-km running performance. Fifteen healthy men (n = 9; mean ± SD age, 42 ± 10 years; height, 177.6 ± 6.1 cm; body mass, 73.9 ± 8.9 kg) and women (n = 6; mean ± SD age, 43 ± 9 years; height, 166.5 ± 4.1 cm; body mass, 65.7 ± 6.8 kg) performed a 5-km running time trial on a track on 4 separate occasions. Immediately before starting the time trial and then after each 1 km, subjects rinsed 25 ml of 0, 3, 6, or 12% maltodextrin for 10 seconds. Mouth rinsing with 0, 3, 6, or 12% maltodextrin did not have a significant effect on the time to complete the time trial (0%, 26:34 ± 4:07 minutes:seconds; 3%, 27:17 ± 4:33 minutes:seconds; 6%, 27:05 ± 3:52 minutes:seconds; 12%, 26:47 ± 4:31 minutes:seconds; p = 0.071; effect size = 0.15), heart rate (p = 0.095; effect size = 0.16), rating of perceived exertion (p = 0.195; effect size = 0.11), blood glucose (p = 0.920; effect size = 0.01), or blood lactate concentration (p = 0.831; effect size = 0.02), with only nonsignificant trivial to small differences between concentrations. Results of this study suggest that carbohydrate mouth rinsing provides no ergogenic advantage over an acaloric placebo (0%) and that there is no dose-response relationship between carbohydrate solution concentration and 5-km track running performance.
Humans running in place on water at simulated reduced gravity.
Minetti, Alberto E; Ivanenko, Yuri P; Cappellini, Germana; Dominici, Nadia; Lacquaniti, Francesco
2012-01-01
On Earth only a few legged species, such as water strider insects, some aquatic birds and lizards, can run on water. For most other species, including humans, this is precluded by body size and proportions, lack of appropriate appendages, and limited muscle power. However, if gravity is reduced to less than Earth's gravity, running on water should require less muscle power. Here we use a hydrodynamic model to predict the gravity levels at which humans should be able to run on water. We test these predictions in the laboratory using a reduced gravity simulator. We adapted a model equation, previously used by Glasheen and McMahon to explain the dynamics of the basilisk lizard, to predict the body mass, stride frequency and gravity necessary for a person to run on water. Progressive body-weight unloading of a person running in place on a wading pool confirmed the theoretical predictions that a person could run on water, at lunar (or lower) gravity levels, using relatively small rigid fins. Three-dimensional motion capture of reflective markers on major joint centers showed that humans, similarly to the basilisk lizard and the western grebe, keep the head-trunk segment at a nearly constant height, despite the high stride frequency and the intensive locomotor effort. Trunk stabilization at a nearly constant height differentiates running on water from other, more usual human gaits. The results showed that a hydrodynamic model of lizards running on water can also be applied to humans, despite the enormous difference in body size and morphology.
Perturbed redshifts from N-body simulations
NASA Astrophysics Data System (ADS)
Adamek, Julian
2018-01-01
In order to keep pace with the increasing data quality of astronomical surveys, the observed source redshift has to be modeled beyond the well-known Doppler contribution. In this article I want to examine the gauge issue that is often glossed over when one assigns a perturbed redshift to simulated data generated with a Newtonian N-body code. A careful analysis reveals the presence of a correction term that has so far been neglected. It is roughly proportional to the observed length scale divided by the Hubble scale and therefore suppressed inside the horizon. However, on gigaparsec scales it can be comparable to the gravitational redshift and hence amounts to an important relativistic effect.
THE SMALL BODY GEOPHYSICAL ANALYSIS TOOL
NASA Astrophysics Data System (ADS)
Bercovici, Benjamin; McMahon, Jay
2017-10-01
The Small Body Geophysical Analysis Tool (SBGAT) that we are developing aims at providing scientists and mission designers with a comprehensive, easy-to-use, open-source analysis tool. SBGAT is meant for seamless generation of valuable simulated data originating from small-body shape models, combined with advanced shape-modification capabilities. The current status of SBGAT is as follows: The modular software architecture that was specified in the original SBGAT proposal was implemented in the form of two distinct packages: a dynamic library, SBGAT Core, containing the data structure and algorithm backbone of SBGAT, and SBGAT Gui, which wraps the former inside a VTK/Qt user interface to facilitate user/data interaction. This modular development facilitates maintenance and addition of new features. Note that SBGAT Core can be utilized independently from SBGAT Gui. SBGAT is presently hosted on a GitHub repository owned by SBGAT's main developer. This repository is public and can be accessed at https://github.com/bbercovici/SBGAT. Along with the commented code, one can find the code documentation at https://bbercovici.github.io/sbgat-doc/index.html. This code documentation is constantly updated in order to reflect new functionalities. SBGAT's user manual is available at https://github.com/bbercovici/SBGAT/wiki. This document contains a comprehensive tutorial indicating how to retrieve, compile, and run SBGAT from scratch. Some of the upcoming development goals are listed hereafter. First, SBGAT's dynamics module will be extended: the PGM algorithm is the only type of analysis method currently implemented, so future work will broaden SBGAT's capabilities with the spherical harmonics expansion of the gravity field and the calculation of YORP coefficients. Second, synthetic measurements will soon be available within SBGAT. The software should be able to generate synthetic observations of different types (radar, lightcurve, point clouds, ...) from the shape model currently manipulated. Finally, shape interaction capabilities will be added to SBGAT Gui using built-in VTK interaction methods.
Follow the Code: Rules or Guidelines for Academic Deans' Behavior?
ERIC Educational Resources Information Center
Bray, Nathaniel J.
2012-01-01
In the popular movie series "Pirates of the Caribbean," there is a pirate code that influences how pirates behave in unclear situations, with a running joke about whether the code is either a set of rules or guidelines for behavior. Codes of conduct in any social group or organization can have much the same feel; they can provide clarity and…
Kim, Seungsuk
2017-08-01
[Purpose] This study aimed to analyze the effects of complex training on carbon monoxide, cardiorespiratory function, and body mass among college students, the age group with the highest smoking rate. [Subjects and Methods] A total of 40 college students voluntarily participated in this study. All subjects smoked and were randomly divided into two groups: the experimental group (N=20) and the control group (N=20). The experimental group underwent complex training (30 min of training five times a week for 12 weeks) while the control group did not participate in such training. The complex training consisted of two parts: aerobic exercise (walking and running) and resistance exercise (weight training). [Results] Two-way ANOVA with repeated measures revealed significant interactions for CO, VO2max, HRmax, VEmax, body fat, and skeletal muscle mass, indicating that the changes were significantly different between groups. [Conclusion] A 12-week complex physical exercise program would be an effective way to support a stop-smoking campaign, as it quickly eliminates CO from the body and improves cardiorespiratory function and body condition.
Experiences with Cray multi-tasking
NASA Technical Reports Server (NTRS)
Miya, E. N.
1985-01-01
The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, designing workable modifications, specific code modifications to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing subroutine data and control structure down the middle of a subroutine is not safe: simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modification of the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation, and it also includes simulation results for a default simulation included with the source code.
Measured and calculated spectral radiation from a blunt body shock layer in an arc-jet wind tunnel
NASA Technical Reports Server (NTRS)
Babikian, Dikran S.; Palumbo, Giuseppe; Craig, Roger A.; Park, Chul; Palmer, Grant; Sharma, Surendra P.
1994-01-01
Spectra of the shock layer radiation incident on the stagnation point of a blunt body placed in an arc-jet wind tunnel were measured over the wavelength range from 600 nm to 880 nm. The test gas was a mixture of 80 percent air and 20 percent argon by mass, and the run was made in a highly nonequilibrium environment. The observed spectra contained contributions from atomic lines of nitrogen, oxygen, and argon, of bound-free and free-free continua, and band systems of N2 and N2(+). The measured spectra were compared with the synthetic spectra, which were obtained through four steps: the calculation of the arc-heater characteristics, of the nozzle flow, of the blunt-body flow, and the nonequilibrium radiation processes. The results show that the atomic lines are predicted approximately correctly, but all other sources are underpredicted by orders of magnitude. A possible explanation for the discrepancy is presented.
NASA Astrophysics Data System (ADS)
Diaz-Torres, Alexis
2011-04-01
A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete and incomplete fusion, and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions.
Program summary
Program title: PLATYPUS
Catalogue identifier: AEIG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 332 342
No. of bytes in distributed program, including test data, etc.: 344 124
Distribution format: tar.gz
Programming language: Fortran-90
Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler
Operating system: Linux or Unix
RAM: 10 MB
Classification: 16.9, 17.7, 17.8, 17.11
Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments).
Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with the Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details.
Restrictions: The program is suited for a weakly-bound two-body projectile colliding with a stable target. The initial orientation of the segment joining the two breakup fragments is considered to be isotropic.
Additional comments: Several source routines from Numerical Recipes, and the Mersenne Twister random number generator package, are included to enable independent compilation.
Running time: About 75 minutes for the input provided, using a PC with a 1.5 GHz processor.
CBP Toolbox Version 3.0 “Beta Testing” Performance Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, III, F. G.
2016-07-29
One function of the Cementitious Barriers Partnership (CBP) is to assess available models of cement degradation and to assemble suitable models into a "Toolbox" that would be made available to members of the partnership, as well as the DOE Complex. To this end, SRNL and Vanderbilt University collaborated to develop an interface using the GoldSim software to the STADIUM code developed by SIMCO Technologies, Inc. and LeachXS/ORCHESTRA developed by the Energy Research Centre of the Netherlands (ECN). Release of Version 3.0 of the CBP Toolbox is planned in the near future. As a part of this release, an increased level of quality assurance for the partner codes and the GoldSim interface has been developed. This report documents results from evaluation testing of the ability of CBP Toolbox 3.0 to perform simulations of concrete degradation applicable to performance assessment of waste disposal facilities. Simulations of the behavior of Savannah River Saltstone Vault 2 and Vault 1/4 concrete subject to sulfate attack and carbonation over a 500- to 1000-year time period were run using a new and upgraded version of the STADIUM code and the version of LeachXS/ORCHESTRA released in Version 2.0 of the CBP Toolbox. Running both codes allowed comparison of results from two models which take very different approaches to simulating cement degradation. In addition, simulations of chloride attack on the two concretes were made using the STADIUM code. The evaluation sought to demonstrate that: 1) the codes are capable of running extended realistic simulations in a reasonable amount of time; 2) the codes produce "reasonable" results; the code developers have provided validation test results as part of their code QA documentation; and 3) the two codes produce results that are consistent with one another. Results of the evaluation testing showed that the three criteria listed above were met by the CBP partner codes. Therefore, it is concluded that the codes can be used to support performance assessment. This conclusion takes into account the QA documentation produced for the partner codes and for the CBP Toolbox.
A Laboratory Test for the Examination of Alactic Running Performance
Kibele, Armin; Behm, David
2005-01-01
A new testing procedure is introduced to evaluate alactic running performance in a 10s sprint task with near-maximal movement velocity. The test is performed on a motor-equipped treadmill with inverted polarity that increases mechanical resistance instead of driving the treadmill belt. As a result, a horizontal force has to be exerted against the treadmill surface in order to overcome the resistive force of the engine and to move the surface in a backward direction. For this task, subjects lean with their hands towards the front safety barrier of the treadmill railing with a slightly inclined body posture. The required skill resembles the pushing movement of bobsleigh pilots at the start of a race. Subjects are asked to overcome this mechanical resistance and to cover as much distance as possible within a time period of 10 seconds. Fifteen male students (age: 27.7 ± 4.1 years, body height: 1.82 ± 0.46 m, body mass: 78.3 ± 6.7 kg) participated in the study. As the resistance force was set to 134 N, subjects ran 35.4 ± 2.6 m on average, corresponding to a mean running velocity of 3.52 ± 0.25 m·s-1. The validity of the new test was examined by statistical inference with various measures related to alactic performance, including a metabolic equivalent to estimate alactic capacity (2892 ± 525 mL O2), an estimate for the oxygen debt (2662 ± 315 ml), the step test by Margaria to estimate alactic energy flow (1691 ± 171 W), and a test to measure the maximal strength in the leg extensor muscles (2304 ± 351 N). The statistical evaluation showed that the new test is in good agreement with the theoretical assumptions for alactic performance. Significant correlation coefficients were found between the test criteria and the measures for alactic capacity (r = 0.79, p < 0.01) as well as alactic power (r = 0.77, p < 0.01). The testing procedure is easy to administer and is well suited to evaluate the alactic capacity of bobsleigh pilots as well as athletes in any other running discipline. Key Points: New testing procedure for the evaluation of alactic running performance. 10s treadmill sprint task with near-maximal movement velocity similar to a bobsleigh start. Treadmill motor is used with inverted polarity to establish mechanical resistance rather than acceleration. Highly significant correlations found between test criteria and alactic performance measures. PMID:24501570
Fuller, Joel T; Thewlis, Dominic; Buckley, Jonathan D; Brown, Nicholas A T; Hamill, Joseph; Tsiros, Margarita D
2017-04-01
Minimalist shoes have been popularized as a safe alternative to conventional running shoes. However, a paucity of research is available investigating the longer-term safety of minimalist shoes. To compare running-related pain and injury between minimalist and conventional shoes in trained runners and to investigate interactions between shoe type, body mass, and weekly training distance. Randomized clinical trial; Level of evidence, 2. Sixty-one trained, habitual rearfoot footfall runners (mean ± SD: body mass, 74.6 ± 9.3 kg; weekly training distance, 25 ± 14 km) were randomly allocated to either minimalist or conventional shoes. Runners gradually increased the time spent running in their allocated shoes over 26 weeks. Running-related pain intensity was measured weekly by use of 100-mm visual analog scales. Time to first running-related injury was also assessed. Interactions were found between shoe type and weekly training distance for weekly running-related pain; greater pain was experienced with minimalist shoes ( P < .05), and clinically meaningful increases (>10 mm) were noted when the weekly training distance was more than 35 km/wk. Eleven of 30 runners sustained an injury in conventional shoes compared with 16 of 31 runners in minimalist shoes (hazard ratio, 1.64; 95% confidence interval, 0.63-4.27; P = .31). A shoe × body mass interaction was found for time to first running-related injury ( P = .01). For runners using minimalist shoes, relative to runners using conventional shoes, the risk of sustaining an injury became more likely with increasing body mass above 71.4 kg, and the risk was moderately increased (hazard ratio, 2.00; 95% confidence interval, 1.10-3.66; P = .02) for runners using minimalist shoes who had a body mass of 85.7 kg. Runners should limit weekly training distance in minimalist shoes to avoid running-related pain. Heavier runners are at greater risk of injury when running in minimalist shoes. Registration: Australian New Zealand Clinical Trials Registry (ACTRN12613000642785).
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
Long-run growth rate in a random multiplicative model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirjol, Dan
2014-08-01
We consider the long-run growth rate of the average value of a random multiplicative process x_(i+1) = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i − (1/2)σ²t_i) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_(n→∞) (1/n) log⟨x_n⟩, at fixed β = (1/2)σ²t_n n, and show that it is given by the equation of state of the lattice gas in thermodynamical equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C), which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamical limit n → ∞.
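As a rough numerical illustration of this growth rate (a sketch under arbitrary parameter choices, not the paper's lattice-gas solution), the quantity λ = (1/n) log⟨x_n⟩ can be estimated by direct Monte Carlo sampling of the recursion in Python:

```python
import numpy as np

def growth_rate_of_average(rho=0.05, sigma=0.2, dt=0.01, n=1000,
                           samples=20000, seed=0):
    """Monte Carlo estimate of lambda = (1/n) log <x_n> for the process
    x_{i+1} = a_i x_i with a_i = 1 + rho*exp(sigma*W_i - 0.5*sigma^2*t_i),
    where W_i is a standard Brownian motion sampled on the grid t_i = i*dt."""
    rng = np.random.default_rng(seed)
    t = dt * np.arange(n)
    dW = rng.normal(0.0, np.sqrt(dt), size=(samples, n))
    W = np.cumsum(dW, axis=1) - dW              # W_0 = 0, W_i = sum of first i increments
    a = 1.0 + rho * np.exp(sigma * W - 0.5 * sigma**2 * t)
    log_xn = np.log(a).sum(axis=1)              # log x_n for each sample path
    m = log_xn.max()                            # log-sum-exp for numerical stability
    log_avg = m + np.log(np.mean(np.exp(log_xn - m)))   # log <x_n>
    return log_avg / n

if __name__ == "__main__":
    print("estimated (1/n) log<x_n>:", growth_rate_of_average())
```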
Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts
NASA Astrophysics Data System (ADS)
Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.
2015-12-01
The Joint Polar Satellite System (JPSS) is the next-generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series of satellites, J-1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments that are on board the Suomi National Polar-Orbiting Partnership (S-NPP) satellite, which was launched on October 28, 2011. The Center for Satellite Applications and Research Algorithm Integration Team (STAR AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug-and-play algorithms. The Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. The JPSS J1 VIIRS Day Night Band (DNB) has an anomalous non-linear response at high scan angles, based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS geolocation (GEO) code analysis results show that the J1 DNB GEO product cannot be generated correctly without a software update. The modified code will support both Op21 and Op21/26 and is backward compatible with S-NPP. The J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Scripts to verify the code change and Lookup Tables (LUTs) update in ADL Block2.
Rosen, Lisa M.; Liu, Tao; Merchant, Roland C.
2016-01-01
BACKGROUND Blood and body fluid exposures are frequently evaluated in emergency departments (EDs). However, efficient and effective methods for estimating their incidence are not yet established. OBJECTIVE Evaluate the efficiency and accuracy of estimating statewide ED visits for blood or body fluid exposures using International Classification of Diseases, Ninth Revision (ICD-9), code searches. DESIGN Secondary analysis of a database of ED visits for blood or body fluid exposure. SETTING EDs of 11 civilian hospitals throughout Rhode Island from January 1, 1995, through June 30, 2001. PATIENTS Patients presenting to the ED for possible blood or body fluid exposure were included, as determined by prespecified ICD-9 codes. METHODS Positive predictive values (PPVs) were estimated to determine the ability of 10 ICD-9 codes to distinguish ED visits for blood or body fluid exposure from ED visits that were not for blood or body fluid exposure. Recursive partitioning was used to identify an optimal subset of ICD-9 codes for this purpose. Random-effects logistic regression modeling was used to examine variations in ICD-9 coding practices and styles across hospitals. Cluster analysis was used to assess whether the choice of ICD-9 codes was similar across hospitals. RESULTS The PPV for the original 10 ICD-9 codes was 74.4% (95% confidence interval [CI], 73.2%–75.7%), whereas the recursive partitioning analysis identified a subset of 5 ICD-9 codes with a PPV of 89.9% (95% CI, 88.9%–90.8%) and a misclassification rate of 10.1%. The ability, efficiency, and use of the ICD-9 codes to distinguish types of ED visits varied across hospitals. CONCLUSIONS Although an accurate subset of ICD-9 codes could be identified, variations across hospitals related to hospital coding style, efficiency, and accuracy greatly affected estimates of the number of ED visits for blood or body fluid exposure. PMID:22561713
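For readers unfamiliar with the metric, the PPV used here is simply the fraction of code-flagged ED visits that truly were blood or body fluid exposures; a minimal sketch with hypothetical counts (not the study's actual tabulations):

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): fraction of code-identified visits that are true exposures."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts chosen only to reproduce the reported 74.4% figure;
# they are not the study's actual data.
print(f"PPV = {positive_predictive_value(744, 256):.1%}")  # -> 74.4%
```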
HNBody: A Simulation Package for Hierarchical N-Body Systems
NASA Astrophysics Data System (ADS)
Rauch, Kevin P.
2018-04-01
HNBody (http://www.hnbody.org/) is an extensible software package for integrating the dynamics of N-body systems. Although general purpose, it incorporates several features and algorithms particularly well-suited to systems containing a hierarchy (wide dynamic range) of masses. HNBody version 1 focused heavily on symplectic integration of nearly-Keplerian systems. Here I describe the capabilities of the redesigned and expanded package version 2, which includes: symplectic integrators up to eighth order (both leap frog and Wisdom-Holman type methods), with symplectic corrector and close encounter support; variable-order, variable-timestep Bulirsch-Stoer and Störmer integrators; post-Newtonian and multipole physics options; advanced round-off control for improved long-term stability; multi-threading and SIMD vectorization enhancements; seamless availability of extended precision arithmetic for all calculations; extremely flexible configuration and output. Tests of the physical correctness of the algorithms are presented using JPL Horizons ephemerides (https://ssd.jpl.nasa.gov/?horizons) and previously published results for reference. The features and performance of HNBody are also compared to several other freely available N-body codes, including MERCURY (Chambers), SWIFT (Levison & Duncan) and WHFAST (Rein & Tamayo).
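As a minimal, generic illustration of the leap frog (kick-drift-kick) symplectic scheme mentioned above (not HNBody's actual interface), the following Python sketch integrates a two-body Kepler orbit:

```python
import numpy as np

def leapfrog_kepler(pos, vel, gm=1.0, dt=0.01, steps=1000):
    """Kick-drift-kick (leapfrog) integration of a test particle around a
    central mass with gravitational parameter gm. Symplectic, second order."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    traj = [pos.copy()]
    for _ in range(steps):
        acc = -gm * pos / np.linalg.norm(pos) ** 3
        vel += 0.5 * dt * acc              # half kick
        pos += dt * vel                    # full drift
        acc = -gm * pos / np.linalg.norm(pos) ** 3
        vel += 0.5 * dt * acc              # half kick
        traj.append(pos.copy())
    return np.array(traj)

# Circular orbit test: r = 1 and v = sqrt(gm/r) should conserve the radius well.
orbit = leapfrog_kepler(pos=[1.0, 0.0], vel=[0.0, 1.0])
print("radius drift after the run:", abs(np.linalg.norm(orbit[-1]) - 1.0))
```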
Fast Simulations of Gas Sloshing and Cold Front Formation
NASA Technical Reports Server (NTRS)
Roediger, E.; ZuHone, J. A.
2011-01-01
We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artefacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artefacts.
Fast Simulations of Gas Sloshing and Cold Front Formation
NASA Technical Reports Server (NTRS)
Roediger, E.; ZuHone, J. A.
2012-01-01
We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artifacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artifacts.
Skeletal Maturation and Aerobic Performance in Young Soccer Players from Professional Academies.
Teixeira, A S; Valente-dos-Santos, J; Coelho-E-Silva, M J; Malina, R M; Fernandes-da-Silva, J; Cesar do Nascimento Salvador, P; De Lucas, R D; Wayhs, M C; Guglielmo, L G A
2015-11-01
The contribution of chronological age, skeletal age (Fels method) and body size to variance in peak velocity derived from the Carminatti Test was examined in 3 competitive age groups of Brazilian male soccer players: 10-11 years (U-12, n=15), 12-13 years (U-14, n=54) and 14-15 years (U-16, n=23). Body size and soccer-specific aerobic fitness were measured. Body composition was predicted from skinfolds. Analysis of variance and covariance (controlling for chronological age) were used to compare soccer players by age group and by skeletal maturity status within each age group, respectively. Relative skeletal age (skeletal age minus chronological age), body size, estimated fat-free mass and performance on the Carminatti Test increased significantly with age. Carminatti Test performance did not differ among players of contrasting skeletal maturity status in the 3 age groups. Results of multiple linear regressions indicated fat mass (negative) and chronological age (positive) were significant predictors of peak velocity derived from the Carminatti Test, whereas skeletal age was not a significant predictor. In conclusion, the Carminatti Test appears to be a potentially interesting field protocol to assess intermittent endurance running capacity in youth soccer programs, since it is independent of biological maturity status. © Georg Thieme Verlag KG Stuttgart · New York.
In situ calibration of neutron activation system on the large helical device
NASA Astrophysics Data System (ADS)
Pu, N.; Nishitani, T.; Isobe, M.; Ogawa, K.; Kawase, H.; Tanaka, T.; Li, S. Y.; Yoshihashi, S.; Uritani, A.
2017-11-01
In situ calibration of the neutron activation system on the Large Helical Device (LHD) was performed by using an intense 252Cf neutron source. To simulate a ring-shaped neutron source, we installed a railway inside the LHD vacuum vessel and made a train loaded with the 252Cf source run along a typical magnetic axis position. Three activation capsules loaded with thirty pieces of indium foils stacked with total mass of approximately 18 g were prepared. Each capsule was irradiated over 15 h while the train was circulating. The activation response coefficient (9.4 ± 1.2) × 10-8 of 115In(n, n')115mIn reaction obtained from the experiment is in good agreement with results from three-dimensional neutron transport calculations using the Monte Carlo neutron transport simulation code 6. The activation response coefficients of 2.45 MeV birth neutron and secondary 14.1 MeV neutron from deuterium plasma were evaluated from the activation response coefficient obtained in this calibration experiment with results from three-dimensional neutron calculations using the Monte Carlo neutron transport simulation code 6.
NASA Astrophysics Data System (ADS)
Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens
2015-04-01
Recent investments in HPC, cloud, and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly, software is available via open source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist, as a user, to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and, in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate but connected components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can be run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g. the Geoscientific Model Development journal) to help users know which codes to trust. 3) Referencing will be accomplished by linking the software framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of that software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally/internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.
RETURN TO RUNNING FOLLOWING A KNEE DISARTICULATION AMPUTATION: A CASE REPORT
Diebal-Lee, Angela R.; Kuenzi, Robert S.; Rábago, Christopher A.
2017-01-01
Background and Purpose The evolution of running-specific prostheses has empowered athletes with lower extremity amputations to run farther and faster than previously thought possible; but running with proper mechanics is still paramount to an injury-free, active lifestyle. The purpose of this case report was to describe the successful alteration of intact limb mechanics from a Rearfoot Striking (RFS) to a Non-Rearfoot Striking (NRFS) pattern in an individual with a knee disarticulation amputation, the associated reduction in Average Vertical Loading Rate (AVLR), and the improvement in functional performance following the intervention. Case description A 30 year-old male with a traumatic right knee disarticulation amputation reported complaints of residual limb pain with running distances greater than 5 km, limiting his ability to train toward his goal of participating in triathlons. Qualitative assessment of his running mechanics revealed a RFS pattern with his intact limb and a NRFS pattern with his prosthetic limb. A full body kinematic and kinetic running analysis using 3D motion capture and force plates was performed. The average intact limb loading rate was four-times greater (112 body weights/s) than in his prosthetic limb which predisposed him to possible injury. He underwent a three week running intervention with a certified running specialist to learn a NRFS pattern with his intact limb. Outcomes Immediately following the running intervention, he was able to run distances of over 10 km without pain. On a two-mile fitness test, he decreased his run time from 19:54 min to 15:14 min. Additionally, the intact limb loading rate was dramatically reduced to 27 body weights/s, nearly identical to the prosthetic limb (24 body weights/s). Discussion This case report outlines a detailed return to run program that targets proprioceptive and neuromuscular components, injury prevention, and specificity of training strategies. The outcomes of this case report are promising as they may spur additional research toward understanding how to eliminate potential injury risk factors associated with running after limb loss. Level of Evidence 4 PMID:28900572
Raibert, M H
1986-03-14
Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
Schulze, Stephan; Schwesig, René; Edel, Melanie; Fieseler, Georg; Delank, Karl-Stefan; Hermassi, Souhail; Laudner, Kevin G
2017-10-01
To obtain spatiotemporal and dynamic running parameters of healthy participants and to identify relationships between running parameters, speed, and physical characteristics. A dynamometric treadmill was used to collect running data among 417 asymptomatic subjects during speeds ranging from 10 to 24 km/h. Spatiotemporal and dynamic running parameters were calculated and measured. Results of the analyses showed that assessing running parameters is dependent on running speed. Body height correlated with stride length (r=0.5), cadence (r=-0.5) and plantar forefoot force (r=0.6). Body mass also had a strong relationship to plantar forefoot forces at 14 and 24 km/h and plantar midfoot forces at 14 and 24 km/h. This reference database can be used in the kinematic and kinetic evaluation of running under a wide range of speeds. Copyright © 2017 Elsevier B.V. All rights reserved.
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
2016-08-01
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org, and the source code is available under the GPL license at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
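To make the workflow-integration point concrete, a hedged sketch of invoking an AlgoRun-packaged algorithm over HTTP is shown below; the host, endpoint path, and payload field name are assumptions for illustration, not AlgoRun's documented API.

```python
import requests

# Hypothetical example of calling an AlgoRun-packaged algorithm through its
# RESTful interface. The URL and the "input" field name below are assumptions;
# consult the container's own documentation for the API it actually exposes.
ALGORUN_URL = "http://localhost:8765/v1/run"   # assumed local container endpoint

def run_packaged_algorithm(input_text: str) -> str:
    """Send input data to the packaged algorithm and return its raw output."""
    response = requests.post(ALGORUN_URL, data={"input": input_text}, timeout=60)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(run_packaged_algorithm("example input data"))
```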
Antiplagiarism Software Takes on the Honor Code
ERIC Educational Resources Information Center
Wasley, Paula
2008-01-01
Among the 100-odd colleges with academic honor codes, plagiarism-detection services raise a knotty problem: Is software compatible with a system based on trust? The answer frequently devolves to the size and culture of the university. Colleges with traditional student-run honor codes tend to "forefront" trust, emphasizing it above all else. This…
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support sources available for the automatic parallelization of computer program. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence, ensuring that the code transformation was accurate.
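To illustrate the kind of check such comparison routines perform (a generic sketch, not CAPTools' actual implementation), sequential and parallel outputs can be compared element-wise within floating-point tolerances:

```python
import numpy as np

def compare_runs(sequential_output: np.ndarray, parallel_output: np.ndarray,
                 rtol: float = 1e-10, atol: float = 1e-12) -> bool:
    """Compare arrays produced by the sequential and parallelized versions of a
    code, report the largest discrepancy, and return True if they agree within
    the given tolerances."""
    diff = np.abs(sequential_output - parallel_output)
    print(f"max abs difference: {diff.max():.3e}")
    return np.allclose(sequential_output, parallel_output, rtol=rtol, atol=atol)

# Synthetic data standing in for the outputs of the two program versions.
seq = np.linspace(0.0, 1.0, 1000)
par = seq + 1e-13 * np.random.default_rng(1).standard_normal(1000)
print("runs agree:", compare_runs(seq, par))
```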
Smoothed Particle Hydrodynamic Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-10-05
This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.
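As an indication of the kind of algorithm such a framework modularizes (a generic SPH summation-density step, assumed for illustration and not taken from this code), a minimal Python sketch is:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Monaghan cubic spline (M4) kernel in 3D with support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = sigma * (1.0 - 1.5 * q[inner]**2 + 0.75 * q[inner]**3)
    w[outer] = sigma * 0.25 * (2.0 - q[outer])**3
    return w

def sph_density(positions, masses, h):
    """Summation density rho_i = sum_j m_j W(|r_i - r_j|, h).
    O(N^2) brute force; production codes use neighbour-search structures."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Tiny example: 100 equal-mass particles in a unit box.
rng = np.random.default_rng(0)
pos = rng.random((100, 3))
rho = sph_density(pos, masses=np.full(100, 1.0 / 100), h=0.2)
print("mean density estimate:", rho.mean())
```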
runDM: Running couplings of Dark Matter to the Standard Model
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2018-02-01
runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
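As a schematic of what running with operator mixing means numerically (a toy sketch with made-up anomalous dimensions, not runDM's actual interface or encoded values), the leading-log evolution can be written as a matrix exponential acting on the vector of coefficients:

```python
import numpy as np
from scipy.linalg import expm

def run_couplings(c_high, gamma, mu_high, mu_low):
    """Leading-log evolution with operator mixing:
    dc/dlog(mu) = gamma^T @ c, so c(mu_low) = expm(gamma^T * log(mu_low/mu_high)) @ c(mu_high).
    'gamma' is an anomalous-dimension matrix; the entries used below are toy
    values for illustration, not those implemented in runDM."""
    t = np.log(mu_low / mu_high)
    return expm(gamma.T * t) @ c_high

# Toy two-operator example: a coupling defined at 1000 GeV evolved down to 2 GeV.
gamma_toy = np.array([[0.00, 0.02],
                      [0.01, 0.00]])   # illustrative mixing entries only
c_high = np.array([1.0, 0.0])          # high-scale coupling to the first operator
print("low-energy couplings:", run_couplings(c_high, gamma_toy, 1000.0, 2.0))
```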
The Initial Conditions and Evolution of Isolated Galaxy Models: Effects of the Hot Gas Halo
NASA Astrophysics Data System (ADS)
Hwang, Jeong-Sun; Park, Changbom; Choi, Jun-Hwan
2013-02-01
We construct several Milky Way-like galaxy models containing a gas halo (as well as gaseous and stellar disks, a dark matter halo, and a stellar bulge) following either an isothermal or an NFW density profile with varying mass and initial spin. In addition, galactic winds associated with star formation are tested in some of the simulations. We evolve these isolated galaxy models using the GADGET-3 N-body/hydrodynamic simulation code, paying particular attention to the effects of the gaseous halo on the evolution. We find that the evolution of the models is strongly affected by the adopted gas halo component, particularly in the gas dissipation and the star formation activity in the disk. The model without a gas halo shows an increasing star formation rate (SFR) at the beginning of the simulation for some hundreds of millions of years and then a continuously decreasing rate to the end of the run at 3 Gyr, whereas the SFRs in the models with a gas halo, depending on the density profile and the total mass of the gas halo, are either relatively flat throughout the simulations or increase until the middle of the run (over a gigayear) and then decrease to the end. The models with the more centrally concentrated NFW gas halo show overall higher SFRs than those with the isothermal gas halo of equal mass. The gas accretion from the halo onto the disk also occurs more in the models with the NFW gas halo; however, this is shown to take place mostly in the inner part of the disk and not to contribute significantly to the star formation unless the gas halo has very high density at the central part. The rotation of a gas halo is found to lower the SFR in the model. The SFRs in the runs including galactic winds are found to be lower than those in the same runs but without winds. We conclude that the effects of a hot gaseous halo on the evolution of galaxies are generally too significant to be simply ignored. We also expect that more hydrodynamical processes in galaxies could be understood through numerical simulations employing both gas disk and gas halo components.
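For reference, the NFW profile mentioned above has the standard form ρ(r) = ρ0 / [(r/r_s)(1 + r/r_s)²]; the short sketch below contrasts it with a generic cored isothermal-like law (the parameter values, and the exact isothermal form used in the paper, are assumptions here):

```python
import numpy as np

def nfw_density(r, rho0, r_s):
    """NFW profile: rho(r) = rho0 / [(r/r_s) * (1 + r/r_s)^2]."""
    x = r / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def isothermal_density(r, rho0, r_c):
    """Generic cored isothermal-like profile, rho(r) = rho0 / (1 + (r/r_c)^2),
    shown only to contrast central concentration with the NFW form."""
    return rho0 / (1.0 + (r / r_c) ** 2)

r = np.logspace(-1, 2, 5)   # radii in kpc, for illustration only
print("NFW:       ", nfw_density(r, rho0=1.0, r_s=20.0))
print("isothermal:", isothermal_density(r, rho0=1.0, r_c=20.0))
```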
The Need for Vendor Source Code at NAS. Revised
NASA Technical Reports Server (NTRS)
Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The Numerical Aerodynamic Simulation (NAS) Facility has a long standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS has a need to maintain support capabilities when vendors are not able; diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
A performance comparison of the Cray-2 and the Cray X-MP
NASA Technical Reports Server (NTRS)
Schmickley, Ronald; Bailey, David H.
1986-01-01
A suite of thirteen large Fortran benchmark codes were run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking) will run untuned Fortran code, on average, about 70 percent of X-MP speeds.
Working research codes into fluid dynamics education: a science gateway approach
NASA Astrophysics Data System (ADS)
Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.
2017-11-01
Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly-specific 'once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist 'science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
In situ and in-transit analysis of cosmological simulations
Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...
2016-08-24
Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent 'offline' analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own ('in situ'). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other 'sidecar' group, which post-processes it while the simulation continues ('in-transit'). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
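The in-transit partitioning described above can be sketched with mpi4py as splitting the world communicator into disjoint simulation and sidecar groups; the group sizes, tags, and data shipped below are illustrative assumptions only (Nyx/BoxLib implement this in C++ with their own machinery):

```python
from mpi4py import MPI

# Sketch of an 'in-transit' layout: ranks are split into a simulation group and
# a disjoint 'sidecar' analysis group that post-processes shipped data.
# Run with at least 2 ranks, e.g.  mpiexec -n 4 python in_transit_sketch.py
world = MPI.COMM_WORLD
assert world.size >= 2, "needs one simulation rank and one sidecar rank"

n_sidecar = max(1, world.size // 4)        # assumed fraction of ranks doing analysis
is_sidecar = world.rank < n_sidecar
group = world.Split(color=int(is_sidecar), key=world.rank)   # disjoint sub-communicators

if is_sidecar:
    if world.rank == 0:                            # one sidecar rank receives a snapshot...
        data = world.recv(source=MPI.ANY_SOURCE, tag=7)
    else:
        data = None
    data = group.bcast(data, root=0)               # ...and shares it within the sidecar group
    print(f"sidecar rank {world.rank}: analyzing {len(data)} values")
else:
    snapshot = [0.1 * i for i in range(10)]        # stand-in for simulation data
    if world.rank == n_sidecar:                    # one simulation rank ships the snapshot
        world.send(snapshot, dest=0, tag=7)
```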
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Close encounters of the third-body kind. [intruding bodies in binary star systems
NASA Technical Reports Server (NTRS)
Davies, M. B.; Benz, W.; Hills, J. G.
1994-01-01
We simulated encounters involving binaries of two eccentricities: e = 0 (i.e., circular binaries) and e = 0.5. In both cases the binary contained a point mass of 1.4 solar masses (i.e., a neutron star) and a 0.8 solar mass main-sequence star modeled as a polytrope. The semimajor axes of both binaries were set to 60 solar radii (0.28 AU). We considered intruders of three masses: 1.4 solar masses (a neutron star), 0.8 solar masses (a main-sequence star or a higher mass white dwarf), and 0.64 solar masses (a more typical mass white dwarf). Our strategy was to perform a large number (40,000) of encounters using a three-body code, then to rerun a small number of cases with a three-dimensional smoothed particle hydrodynamics (SPH) code to determine the importance of hydrodynamical effects. Using the results of the three-body runs, we computed the exchange cross sections, σ_ex. From the results of the SPH runs, we computed the cross sections for clean exchange, denoted by σ_cx; the formation of a triple system, denoted by σ_trp; and the formation of a merged binary with an object formed from the merger of two of the stars left in orbit around the third star, denoted by σ_mb. For encounters between either binary and a 1.4 solar mass neutron star, σ_cx ≈ 0.7 σ_ex and σ_mb + σ_trp ≈ 0.3 σ_ex. For encounters between either binary and the 0.8 solar mass main-sequence star, σ_cx ≈ 0.50 σ_ex and σ_mb + σ_trp ≈ 1.0 σ_ex. If the main-sequence star intruder is replaced by a white dwarf of the same mass, we have σ_cx ≈ 0.5 σ_ex and σ_mb + σ_trp ≈ 1.6 σ_ex. Although the exchange cross section is a sensitive function of intruder mass, we see that the cross section to produce merged binaries is roughly independent of intruder mass. The merged binaries produced have semimajor axes much larger than either those of the original binaries or those of binaries produced in clean exchanges. Coupled with the lower kick velocities they receive from the encounters, their larger size will enhance their cross section, shortening the waiting time until a subsequent encounter with another single star.
Buresh, Robert; Berg, Kris; Noble, John
2005-09-01
The purposes of this study were to determine the relationships between: (a) measures of body size/composition and heat production/storage, and (b) heat production/storage and heart rate (HR) drift during running at 95% of the velocity that elicited lactate threshold, which was determined for 20 healthy recreational male runners. Subsequently, changes in skin and tympanic temperatures associated with a vigorous 20-min run, HR, and VO2 data were recorded. It was found that heat production was significantly correlated with body mass (r = .687), lean mass (r = .749), and body surface area (BSA, r = .699). Heat storage was significantly correlated with body mass (r = .519), fat mass (r = .464), and BSA (r = .498). The percentage of produced heat stored was significantly correlated with body mass (r = .427), fat mass (r = .455), and BSA (r = .414). Regression analysis showed that the sum of body mass, percentage of body fat, BSA, lean mass, and fat mass accounted for 30% of the variability in heat storage. It was also found that HR drift was significantly correlated with heat storage (r = .383), percentage of produced heat stored (r = .433), and core temperature change (r = .450). It was concluded that heavier runners experienced greater heat production, heat storage, and core temperature increases than lighter runners during vigorous running.
Partitioning the Metabolic Cost of Human Running: A Task-by-Task Approach
Arellano, Christopher J.; Kram, Rodger
2014-01-01
Compared with other species, humans can be very tractable and thus an ideal “model system” for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing, and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the “cost of generating force” hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be “individually” partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. PMID:24838747
Castonguay, Andree L; Gilchrist, Jenna D; Mack, Diane E; Sabiston, Catherine M
2013-06-01
This study explored body-related emotional experiences of pride in young adult males (n=138) and females (n=165). Data were collected using a relived emotion task and analyzed using inductive content analysis. Thirty-nine codes were identified and grouped into six categories (triggers, contexts, cognitive attributions, and affective, cognitive, and behavioral outcomes) for each of two themes (hubristic and authentic pride). Hubristic pride triggers included evaluating appearance/fitness as superior. Cognitions centered on feelings of superiority. Behaviors included strategies to show off. Triggers for authentic pride were personal improvements/maintenance in appearance and meeting or exceeding goals. Feeling accomplished was a cognitive outcome, and physical activity was a behavioral strategy. Contexts for the experience of both facets of pride primarily involved sports settings, swimming/beach, and clothes shopping. These findings provide theoretical support for models of pride as it applies to body image, and advances conceptual understanding of positive body image. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, F.; Zimmerman, B.; Heard, F.
A number of N Reactor core heatup studies have been performed using the TRUMP-BD computer code. These studies were performed to address questions concerning the dependency of results on potential variations in the material properties and/or modeling assumptions. This report describes and documents a series of 31 TRUMP-BD runs performed to determine the sensitivity of calculated inner-fuel temperatures to a variety of TRUMP input parameters and to a change in the node density in a high-temperature-gradient region. The results of this study are based on the 32-in. model. 18 refs., 17 figs., 2 tabs.
Hamlin, Michael J; Fraser, Meegan; Lizamore, Catherine A; Draper, Nick; Shearman, Jeremy P; Kimber, Nicholas E
2014-03-27
Body fat and maturation both influence cardiorespiratory fitness; however, few studies have taken these variables into account when using field tests to predict children's fitness levels. The purpose of this study was to determine the relationship between two field tests of cardiorespiratory fitness (20 m Maximal Multistage Shuttle Run [20-MST], 550 m distance run [550-m]) and direct measurement of VO2max after adjustment for body fatness and maturity levels. Fifty-three participants (25 boys, 28 girls, age 10.6 ± 1.2 y, mean ± SD) had their body fat levels estimated using bioelectrical impedance (16.6% ± 6.0% and 20.0% ± 5.8% for boys and girls, respectively). Participants performed, in random order, the 20-MST and 550-m run, followed by a progressive treadmill test to exhaustion during which gas exchange measures were taken. Pearson correlation coefficient analysis revealed that the participants' performance in the 20-MST and 550-m run was highly correlated with VO2max obtained during the treadmill test to exhaustion (r = 0.70 and 0.59 for the 20-MST and 550-m run, respectively). Adjusting for body fatness and maturity levels in a multivariate regression analysis increased the associations between the field tests and VO2max (r = 0.73 for the 20-MST and 0.65 for the 550-m). We may conclude that both the 20-MST and the 550-m distance run are valid field tests of cardiorespiratory fitness in New Zealand children aged 8-13 years, and that incorporating body fatness and maturity levels explains an additional 5-7% of the variance.
Yan, Lin; Sundaram, Sneha; Nielsen, Forrest H
2017-11-01
This study investigated the effect of voluntary running of defined distances on body adiposity in male C57BL/6 mice fed a high-fat diet. Mice were assigned to 6 groups and fed a standard AIN93G diet (sedentary) or a modified high-fat AIN93G diet (sedentary; unrestricted running; or 75%, 50%, or 25% of unrestricted running) for 12 weeks. The average running distance was 8.3, 6.3, 4.2, and 2.1 km/day for the unrestricted, 75%, 50%, and 25% of unrestricted runners, respectively. Body adiposity was 46% higher in sedentary mice when fed the high-fat diet instead of the standard diet. Running decreased adiposity in mice fed the high-fat diet in a dose-dependent manner but with no significant difference between sedentary mice and those running 2.1 km/day. In sedentary mice, the high-fat instead of the standard diet increased insulin resistance, hepatic triacylglycerides, and adipose and plasma concentrations of leptin and monocyte chemotactic protein-1 (MCP-1). Running reduced these variables in a dose-dependent manner. Adipose adiponectin was lowest in sedentary mice fed the high-fat diet; running raised adiponectin in both adipose tissue and plasma. Running 8.3 and 6.3 km/day had the greatest, but similar, effects on the aforementioned variables. Running 2.1 km/day did not affect these variables except, when compared with sedentariness, it significantly decreased MCP-1. The findings showed that running 6.3 km/day was optimal for reducing adiposity and associated inflammation that was increased in mice by feeding a high-fat diet. The findings suggest that voluntary running of defined distances may counteract the obesogenic effects of a high-fat diet.
Quality of head injury coding from autopsy reports with AIS © 2005 update 2008.
Schick, Sylvia; Humrich, Anton; Graw, Matthias
2018-02-28
Objective: Coding injuries from autopsy reports of traffic accident victims according to the Abbreviated Injury Scale AIS © 2005 update 2008 [1] is quite time-consuming. The suspicion arose that many issues leading to discussion between coder and control reader were based on information required by the AIS that was not documented in the autopsy reports. To quantify this suspicion, we introduced an AIS-detail-indicator (AIS-DI). To each injury in the AIS Codebook, one letter from A to N was assigned indicating the level of detail. Rules were formulated to ensure repeatable assignments. This scheme was applied to a selection of 149 multiply injured traffic fatalities. The frequencies of "not A" codes were calculated for each body region, and it was analysed why the most detailed level A had not been coded. As a first finding, the results for the head region are presented. 747 AIS head injury codes were found in 137 traffic fatalities, and 60% of these injuries were coded with an AIS-DI of level A. There are three different explanations for codes of AIS-DI "not A": Group 1 "Missing information in autopsy report" (5%), Group 2 "Clinical data required by AIS" (20%), and Group 3 "AIS system determined" (15%). Groups 1 and 2 show consequences for the ISS in 25 cases. Other body regions might perform differently. The AIS-DI can indicate the quality of the underlying data and, depending on the aims of different AIS users, it can be a helpful tool for quality checks.
Direct Large-Scale N-Body Simulations of Planetesimal Dynamics
NASA Astrophysics Data System (ADS)
Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George
2000-01-01
We describe a new direct numerical method for simulating planetesimal dynamics in which N ~ 10^6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N ~ 10^5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the Jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.
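As a rough illustration of the per-timestep collision check that such planetesimal codes perform, the following Python sketch flags pairs of bodies whose separation has dropped below the sum of their radii. It is a toy O(N^2) loop with assumed positions and radii, not the parallel tree-based algorithm described above, which would use neighbour lists to keep the cost manageable at N ~ 10^6.

    import numpy as np

    def find_collisions(pos, radii):
        # return index pairs of bodies that overlap at this timestep
        hits = []
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(pos[i] - pos[j]) < radii[i] + radii[j]:
                    hits.append((i, j))
        return hits

    # three bodies: the first two overlap, the third is far away (illustrative units)
    pos = np.array([[0.0, 0.0, 0.0], [1.5e-4, 0.0, 0.0], [1.0, 0.0, 0.0]])
    radii = np.array([1.0e-4, 1.0e-4, 1.0e-4])
    print(find_collisions(pos, radii))  # [(0, 1)]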
The Use of 2 Conditioning Programs and the Fitness Characteristics of Police Academy Cadets.
Cocke, Charles; Dawes, Jay; Orr, Robin Marc
2016-11-01
Police academy training must physically prepare cadets for the rigors of their occupational tasks to prevent injury and allow them to adequately perform their duties. To compare the effects of 2 physical training programs on multiple fitness measures in police cadets. Cohort study. Police training academy. We collected data from 70 male (age = 27.4 ± 5.9 years, body weight = 85.4 ± 11.8 kg) and 20 female (age = 30.5 ± 5.8 years, body weight = 62.8 ± 11.0 kg) police cadets and analyzed data from 61 male cadets (age = 27.5 ± 5.5 years, body weight = 87.7 ± 13.2 kg). Participants completed one of two 6-month training programs. The randomized training group (RTG; n = 50), comprising 4 separate and sequential groups (n = 13, n = 11, n = 13, n = 13), completed a randomized training program that incorporated various strength and endurance exercises chosen on the day of training. The periodized group (PG; n = 11) completed a periodized training program that alternated specific phases of training. Anthropometric fitness measures were body weight, fat mass, and lean body mass. Muscular and metabolic fitness measures were 1-repetition maximum bench press, push-up and sit-up repetitions performed in 1 minute, vertical jump, 300-m sprint, and 2.4-km run. The RTG demonstrated improvements in all outcome measures between pretraining and posttraining; however, the improvements varied among the 4 individual RTGs. Conversely, the PG displayed improvements in only 3 outcome measures (push-ups, sit-ups, and 300-m sprint) but approached the level of significance set for this study (P < .01) in body weight, fat mass, and 1-repetition maximum bench press. Regardless of format, physical training programs can improve the fitness of tactical athletes. In general, physical fitness measures appeared to improve more in the RTG than in the PG. However, this observation varied among groups, and injury rates were not compared.
A comparison of five benchmarks
NASA Technical Reports Server (NTRS)
Huss, Janice E.; Pennline, James A.
1987-01-01
Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.
ERIC Educational Resources Information Center
Knechtle, Beat; Duff, Brida; Welzel, Ulrich; Kohler, Gotz
2009-01-01
In the present study, we investigated the association of anthropometric parameters with race performance in ultraendurance runners in a multistage ultraendurance run, in which athletes had to run 338 km within 5 consecutive days. In 17 male successful finishers, calculations of body mass, body height, skinfold thicknesses, extremity circumference,…
Goldsmith, Kaitlyn M; Byers, E Sandra
2016-06-01
This study investigated the messages individuals receive from their partners about their bodies and their perceived impact on body image and sexual well-being. Young adult men (n=35) and women (n=57) completed open-ended questions identifying messages they had received from partners and the impact of these messages on their body image and sexual well-being. Content coding revealed three verbal (expressions of approval and pride, challenging negative beliefs, expressions of sexual attraction/arousal/desire) and two nonverbal (physical affection, physical expressions of sexual attraction/arousal/desire) positive messages as well as one verbal (disapproval/disgust) and two nonverbal (rejection, humiliation) negative messages. Some participants reported gender-related messages (muscularity/strength, genital appearance, breast appearance, weight, and comparison to others). Positive messages were seen to increase confidence, self-acceptance, and sexual empowerment/fulfillment, whereas negative messages decreased these feelings. Our findings suggest that even everyday, seemingly neutral messages are perceived to have an important impact on young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fluid retention, muscle damage, and altered body composition at the Ultraman triathlon.
Baur, Daniel A; Bach, Christopher W; Hyder, William J; Ormsbee, Michael J
2016-03-01
The primary purpose of this investigation was to determine the effects of participation in a 3-day multistage ultraendurance triathlon (stage 1 = 10 km swim, 144.8 km bike; stage 2 = 275.4 km bike; stage 3 = 84.4 km run) on body mass and composition, hydration status, hormones, muscle damage, and blood glucose. Eighteen triathletes (mean ± SD; age 41 ± 7.5 years; height 175 ± 9 cm; weight 73.5 ± 9.8 kg; male n = 14, female n = 4) were assessed before and after each stage of the race. Body mass and composition were measured via bioelectrical impedance, hydration status via urine specific gravity, hormones and muscle damage via venous blood draw, and blood glucose via fingerstick. Following the race, significant changes included reductions in body mass (qualified effect size: trivial), fat mass (moderate), and percent body fat (small); increases in percent total body water (moderate) and urine specific gravity (large); and unchanged absolute total body water and fat-free mass. There were also extremely large increases in creatine kinase, C-reactive protein, aldosterone and cortisol combined with reductions in testosterone (small) and the testosterone:cortisol ratio (moderate). There were associations between post-race aldosterone and total body water (r = -0.504) and changes in cortisol and fat-free mass (r = -0.536). Finally, blood glucose increased in a stepwise manner prior to each stage. Participation in Ultraman Florida leads to fluid retention and dramatic alterations in body composition, muscle health, hormones, and metabolism.
High Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2008-01-01
Taylor series integration is implemented in a spacecraft trajectory analysis code, the Spacecraft N-body Analysis Program (SNAP), and compared with the code's existing eighth-order Runge-Kutta Fehlberg time integration scheme. Nine trajectory problems, including near-Earth, lunar, Mars and Europa missions, are analyzed. Head-to-head comparison at five different error tolerances shows that, on average, Taylor series is faster than Runge-Kutta Fehlberg by a factor of 15.8. Results further show that Taylor series has superior convergence properties. Taylor series integration proves that it can provide rapid, highly accurate solutions to spacecraft trajectory problems.
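To make the comparison concrete, the toy Python sketch below contrasts a truncated Taylor-series step with a classical fourth-order Runge-Kutta step on the scalar test equation y' = λy. All names and values are assumed for illustration; this is not the SNAP implementation or its Fehlberg scheme.

    import math

    def taylor_step(y, lam, h, order=8):
        # y(t+h) ≈ sum over k = 0..order of (lam*h)^k / k! * y(t), built term by term
        term, total = y, y
        for k in range(1, order + 1):
            term *= lam * h / k
            total += term
        return total

    def rk4_step(y, lam, h):
        f = lambda v: lam * v
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

    lam, h, y0 = -1.0, 0.5, 1.0
    exact = y0 * math.exp(lam * h)
    print(abs(taylor_step(y0, lam, h) - exact))  # ~5e-9 for the 8th-order Taylor step
    print(abs(rk4_step(y0, lam, h) - exact))     # ~2e-4 for the 4th-order RK step

For the same step size, the higher-order Taylor step is far more accurate, which is the kind of behaviour the study exploits.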
Malataras, G; Kappas, C; Lovelock, D M; Mohan, R
1997-01-01
This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.
Secure web-based invocation of large-scale plasma simulation codes
NASA Astrophysics Data System (ADS)
Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.
2004-12-01
We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.
Computer modeling of thermoelectric generator performance
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.; Shields, V.
1982-01-01
Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
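The kind of couple-level bookkeeping described above can be illustrated with a minimal steady-state power calculation for a single thermoelectric couple. The material properties, geometry, and matched load below are assumed purely for illustration; they are not DEGRA 2 inputs, and the sketch omits Thomson heating, segmentation, and sublimation.

    # single-couple steady-state estimate in Python (all values assumed for illustration)
    S = 200e-6          # Seebeck coefficient, V/K
    rho = 1.0e-5        # electrical resistivity, ohm*m
    k = 1.5             # thermal conductivity, W/(m*K)
    L, A = 0.01, 1.0e-4 # leg length (m) and cross-sectional area (m^2)
    Th, Tc = 900.0, 500.0

    R = rho * L / A                                 # internal electrical resistance
    K = k * A / L                                   # thermal conductance of the leg
    dT = Th - Tc
    R_load = R                                      # matched load for simplicity
    I = S * dT / (R + R_load)                       # current driven by the Seebeck EMF
    P_out = I ** 2 * R_load                         # electrical power delivered
    Q_hot = S * I * Th + K * dT - 0.5 * I ** 2 * R  # Peltier + conduction - half of Joule
    print(P_out, P_out / Q_hot)                     # ~1.6 W at ~13% conversion efficiency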
Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.
Mustroph, M L; Pinardo, H; Merritt, J R; Rhodes, J S
2016-10-01
Evidence suggests that 4 weeks of voluntary wheel running abolishes conditioned place preference (CPP) for cocaine in male C57BL/6J mice. The aims were to determine the duration and timing of exposure to running wheels necessary to reduce CPP, and the extent to which running per se, as compared to environmental enrichment without running, influences CPP. A total of 239 males were conditioned for 4 days twice daily with cocaine (10 mg/kg) and then split into 7 intervention groups prior to 4 days of CPP testing. Experiment 1 consisted of two groups housed as follows: a short sedentary group (SS; n=20) in normal cages for 1 week; a short running group (SR; n=20) with running wheels for 1 week. Experiment 2 consisted of five groups housed as follows: 1 week of running followed by a 3-week sedentary period (SRS; n=20); a 3-week sedentary period followed by 1 week of running (SSR; n=20); a long sedentary group (LS; n=66) in normal cages for 4 weeks; a long running group (LR; n=66) with running wheels for 4 weeks; and a long environmental enrichment group (EE; n=27) with toys for 4 weeks. Levels of running were similar in all running groups. Both running and environmental enrichment reduced CPP relative to sedentary groups. Results suggest that the abolishment of cocaine CPP from running is robust and occurs with as little as 1 week of intervention, but may be related to the enrichment component of running rather than to physical activity per se. Copyright © 2016 Elsevier B.V. All rights reserved.
Rector, R Scott; Loethen, Joanne; Ruebel, Meghan; Thomas, Tom R; Hinton, Pamela S
2009-10-01
Weight loss improves metabolic fitness and reduces morbidity and mortality; however, weight reduction also reduces bone mineral density (BMD) and increases bone turnover. Weight-bearing aerobic exercise may preserve bone mass and maintain normal bone turnover during weight reduction. We investigated the impact of weight-bearing and nonweight-bearing exercise on serum markers of bone formation and breakdown during short-term, modest weight loss in overweight premenopausal women. Subjects (n = 36) were assigned to 1 of 3 weight-loss interventions designed to produce a 5% reduction in body weight over 6 weeks: (i) energy restriction only (n = 11; DIET); (ii) energy restriction plus nonweight-bearing exercise (n = 12, CYCLE); or (iii) energy restriction plus weight-bearing exercise (n = 13, RUN). Bone turnover markers were measured in serum collected at baseline and after weight loss. All groups achieved a ~5% reduction in body weight (DIET = 5.2%; CYCLE = 5.0%; RUN = 4.7%). Osteocalcin (OC) and C-terminal telopeptide of type I collagen (CTX) increased with weight loss in all 3 groups (p < 0.05), whereas bone alkaline phosphatase was unaltered by the weight-loss interventions. At baseline, OC and CTX were positively correlated (r = 0.36, p = 0.03), but the strength of this association was diminished (r = 0.30, p = 0.06) after weight loss. Modest weight loss, regardless of method, resulted in a significant increase in both OC and CTX. Low-impact, weight-bearing exercise had no effect on serum markers of bone formation or resorption in premenopausal women during weight loss. Future studies that examine the effects of high-impact, weight-bearing activity on bone turnover and BMD during weight loss are warranted.
Liu, Tzu-Wen; Park, Young-Min; Holscher, Hannah D.; Padilla, Jaume; Scroggins, Rebecca J.; Welly, Rebecca; Britton, Steven L.; Koch, Lauren G.; Vieira-Potter, Victoria J.; Swanson, Kelly S.
2015-01-01
The gut microbiota is considered a relevant factor in obesity and associated metabolic diseases, for which postmenopausal women are particularly at risk. Increasing physical activity has been recognized as an efficacious approach to prevent or treat obesity, yet the impact of physical activity on the microbiota remains under-investigated. We examined the impacts of voluntary exercise on host metabolism and gut microbiota in ovariectomized (OVX) high-capacity running (HCR) and low-capacity running (LCR) rats. HCR and LCR rats (age = 27 wk) were OVX and fed a high-fat diet (45% kcal fat) ad libitum and housed in cages equipped with (exercise, EX) or without (sedentary, SED) running wheels for 11 wk (n = 7-8/group). We hypothesized that increased physical activity would hinder weight gain, increase metabolic health and shift the microbiota of LCR rats, resulting in populations more similar to that of HCR rats. Animals were compared for characteristic metabolic parameters including body composition, lipid profile and energy expenditure; cecal digesta were collected for DNA extraction. 16S rRNA gene-based amplicon Illumina MiSeq sequencing was performed, followed by analysis using QIIME 1.8.0 to assess the cecal microbiota. Voluntary exercise decreased body and fat mass, and normalized fasting NEFA concentrations of LCR rats, despite their running only one-third the distance of HCR rats. Exercise, however, increased food intake, weight gain and fat mass of HCR rats. Exercise clustered the gut microbial community of LCR rats, which separated them from the other groups. Assessments of specific taxa revealed significant (p<0.05) line by exercise interactions, including shifts in the abundances of Firmicutes, Proteobacteria, and Cyanobacteria. Relative abundance of the Christensenellaceae family was higher (p = 0.026) in HCR than LCR rats, and positively correlated (p<0.05) with food intake, body weight and running distance. These findings demonstrate that exercise differentially impacts host metabolism and gut microbial communities of female HCR and LCR rats without ovarian function. PMID:26301712
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald
2012-01-01
Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners.
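As a worked example, the published regression can be evaluated directly in Python; the runner's values below are illustrative, not data from the study.

    def predicted_marathon_time(percent_body_fat, training_speed_kmh):
        # race time (minutes) = 326.3 + 2.394*(% body fat) - 12.06*(training speed, km/h)
        return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh

    # e.g., 15% body fat and an 11 km/h training speed predict roughly 230 minutes
    print(predicted_marathon_time(15.0, 11.0))  # ~229.6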
Quick Response codes for surgical safety: a prospective pilot study.
Dixon, Jennifer L; Smythe, William Roy; Momsen, Lara S; Jupiter, Daniel; Papaconstantinou, Harry T
2013-09-01
Surgical safety programs have been shown to reduce patient harm; however, there is variable compliance. The purpose of this study is to determine if innovative technology such as Quick Response (QR) codes can facilitate surgical safety initiatives. We prospectively evaluated the use of QR codes during the surgical time-out for 40 operations. Feasibility and accuracy were assessed. Perceptions of the current time-out process and the QR code application were evaluated through surveys using a 5-point Likert scale and binomial yes or no questions. At baseline (n = 53), survey results from the surgical team agreed or strongly agreed that the current time-out process was efficient (64%), easy to use (77%), and provided clear information (89%). However, 65% of surgeons felt that process improvements were needed. Thirty-seven of 40 (92.5%) QR codes scanned successfully, of which 100% were accurate. Three scan failures resulted from excessive curvature or wrinkling of the QR code label on the body. Follow-up survey results (n = 33) showed that the surgical team agreed or strongly agreed that the QR program was clearer (70%), easier to use (57%), and more accurate (84%). Seventy-four percent preferred the QR system to the current time-out process. QR codes accurately transmit patient information during the time-out procedure and are preferred to the current process by surgical team members. The novel application of this technology may improve compliance, accuracy, and outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.
Platt, Kristen M; Charnigo, Richard J; Shertzer, Howard G; Pearson, Kevin J
2016-01-01
Exercise is an inexpensive intervention that may be used to reduce obesity and its consequences. In addition, many individuals who regularly exercise utilize dietary supplements to enhance their exercise routine and to accelerate fat loss or increase lean mass. Branched-chain amino acids (BCAAs) are a popular supplement and have been shown to produce a number of beneficial effects in rodent models and humans. Therefore, we hypothesized that BCAA supplementation would protect against high fat diet (HFD)-induced glucose intolerance and obesity in mice with and without access to exercise. We subjected 80 female C57BL/6 mice to a paradigm of HFD feeding, exercise in the form of voluntary wheel running, and BCAA supplementation in the drinking water for 16 weeks (n = 10 per group). Body weight was monitored weekly, while food and water consumption were recorded twice weekly. During the 5th, 10th, and 15th weeks of treatment, glucose tolerance and body composition were analyzed. Exercise significantly improved glucose tolerance in both control-fed and HFD-fed mice. BCAA supplementation, however, did not significantly alter glucose tolerance in any treatment group. While BCAA supplements did not improve lean to fat mass ratio in sedentary mice, it significantly augmented the effects of exercise on this parameter.
A Concept for Run-Time Support of the Chapel Language
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents a concept for run-time implementation of other concepts embodied in the Chapel programming language. (Now undergoing development, Chapel is intended to become a standard language for parallel computing that would surpass older such languages both in computational performance and in the efficiency with which pre-existing code can be reused and new code written.) The aforementioned other concepts are those of distributions, domains, allocations, and access, as defined in a separate document called "A Semantic Framework for Domains and Distributions in Chapel" and linked to a language specification defined in another separate document called "Chapel Specification 0.3." The concept presented in the instant report is the recognition that the data domain invented for Chapel offers a novel approach to distributing and processing data in a massively parallel environment. The concept is offered as a starting point for development of working descriptions of functions and data structures that would be necessary to implement interfaces to a compiler for transforming the aforementioned other concepts from their representations in Chapel source code to their run-time implementations.
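The distribution-plus-domain idea can be caricatured in a few lines of Python: a domain is an index set, and a distribution maps each index to the locale that owns it. The sketch below is a conceptual illustration under assumed names; it is not Chapel syntax or the run-time interface the report describes.

    def block_owner(index, n_indices, n_locales):
        # block distribution: each locale owns a contiguous chunk of the index domain
        block = -(-n_indices // n_locales)  # ceiling division
        return index // block

    domain = range(100)                       # a 1-D index domain
    owners = {i: block_owner(i, 100, 4) for i in domain}
    print(owners[0], owners[49], owners[99])  # 0 1 3: indices mapped across 4 locales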
Exercise economy in skiing and running
Losnegard, Thomas; Schäfer, Daniela; Hallén, Jostein
2014-01-01
Substantial inter-individual variations in exercise economy exist even in highly trained endurance athletes. The variation is believed to be determined partly by intrinsic factors. Therefore, in the present study, we compared exercise economy in V2-skating, double poling, and uphill running. Ten highly trained male cross-country skiers (23 ± 3 years, 180 ± 6 cm, 75 ± 8 kg, VO2peak running: 76.3 ± 5.6 mL·kg−1·min−1) participated in the study. Exercise economy and VO2peak during treadmill running, ski skating (V2 technique) and double poling were compared based on correlation analysis. There was a very large correlation in exercise economy between V2-skating and double poling (r = 0.81) and large correlations between V2-skating and running (r = 0.53) and double poling and running (r = 0.58). There were trivial to moderate correlations between exercise economy and the intrinsic factors VO2peak (r = 0.00–0.23), cycle rate (r = 0.03–0.46), body mass (r = −0.09–0.46) and body height (r = 0.11–0.36). In conclusion, the inter-individual variation in exercise economy could be explained only moderately by differences in VO2peak, body mass and body height. Apparently other intrinsic factors contribute to the variation in exercise economy between highly trained subjects. PMID:24478718
An Improved Neutron Transport Algorithm for HZETRN2006
NASA Astrophysics Data System (ADS)
Slaba, Tony
NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
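The resolution problem can be illustrated with a toy quadrature example: a narrow spectral distribution integrated on a coarse energy grid versus a finely resolved one. The Gaussian kernel and grids in the Python sketch below are assumed purely for illustration; this is not the HZETRN2006 elastic-interaction treatment.

    import numpy as np

    def kernel(E, E0=1.0, width=0.01):
        # narrow spectral distribution centered at E0 (assumed Gaussian shape)
        return np.exp(-0.5 * ((E - E0) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

    def trapezoid(y, x):
        # simple composite trapezoid rule
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    coarse = np.linspace(0.5, 1.5, 11)        # 0.1 spacing: far wider than the kernel
    fine = np.linspace(0.5, 1.5, 2001)        # fine enough to resolve the peak
    print(trapezoid(kernel(coarse), coarse))  # badly misestimates the unit-normalized integral
    print(trapezoid(kernel(fine), fine))      # ~1.0, the correct normalization

Refining the quadrature only where the kernel is narrow recovers the integral without inflating the global energy grid, which is the spirit of the approach described above.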
Meta-Analyses of the Effects of Habitual Running on Indices of Health in Physically Inactive Adults.
Hespanhol Junior, Luiz Carlos; Pillay, Julian David; van Mechelen, Willem; Verhagen, Evert
2015-10-01
In order to implement running to promote physical activity, it is essential to quantify the extent to which running improves health. The aim was to summarise the literature on the effects of endurance running on biomedical indices of health in physically inactive adults. Electronic searches were conducted in October 2014 on PubMed, Embase, CINAHL, SPORTDiscus, PEDro, the Cochrane Library and LILACS, with no limits of date and language of publication. Randomised controlled trials (with a minimum of 8 weeks of running training) that included physically inactive but healthy adults (18-65 years) were selected. The studies needed to compare intervention (i.e., endurance running) and control (i.e., no intervention) groups. Two authors evaluated study eligibility, extracted data, and assessed risk of bias; a third author resolved any uncertainties. Random-effects meta-analyses were performed to summarise the estimates for length of training and sex. A dose-response analysis was performed with random-effects meta-regression in order to investigate the relationship between running characteristics and effect sizes. After screening 22,380 records, 49 articles were included, of which 35 were used to combine data on ten biomedical indices of health. On average the running programs were composed of 3.7 ± 0.9 sessions/week, 2.3 ± 1.0 h/week, 14.4 ± 5.4 km/week, at 60-90% of the maximum heart rate, and lasted 21.5 ± 16.8 weeks. After 1 year of training, running was effective in reducing body mass by 3.3 kg [95% confidence interval (CI) 4.1-2.5], body fat by 2.7% (95% CI 5.1-0.2), resting heart rate by 6.7 min−1 (95% CI 10.3-3.0) and triglycerides by 16.9 mg·dl−1 (95% CI 28.1-5.6). Also, running significantly increased maximal oxygen uptake (VO2max) by 7.1 ml·min−1·kg−1 (95% CI 5.0-9.1) and high-density lipoprotein (HDL) cholesterol by 3.3 mg·dl−1 (95% CI 1.2-5.4). No significant effect was found for lean body mass, body mass index, total cholesterol and low-density lipoprotein cholesterol after 1 year of training. In the dose-response analysis, larger effect sizes were found for longer lengths of training. It was only possible to combine the data of ten of the 161 outcome measures identified. Lack of information on training characteristics precluded a multivariate model in the dose-response analysis. Endurance running was effective in providing substantial beneficial effects on body mass, body fat, resting heart rate, VO2max, triglycerides and HDL cholesterol in physically inactive adults. The longer the length of training, the larger the achieved health benefits. Clinicians and health authorities can use this information to advise individuals to run, and to support policies towards investing in running programs.
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
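A software caricature of the read-buffer idea is sketched below in Python: the values actually read by the most recent N instructions are retained so that a rollback can replay them. The class, sizes, and values are assumed for illustration; the paper's mechanism is hardware, and the point of its extra operand-field bits is precisely to save fewer than the worst-case 2N values.

    from collections import deque

    class ReadBuffer:
        # retain source-operand values read by the last n instructions (worst case 2 per instruction)
        def __init__(self, n):
            self.saved = deque(maxlen=2 * n)

        def record(self, pc, value):
            self.saved.append((pc, value))      # save a value actually read

        def rollback(self, from_pc):
            # values needed to re-execute instructions at or after from_pc
            return [(pc, v) for pc, v in self.saved if pc >= from_pc]

    rb = ReadBuffer(n=4)
    for pc, val in enumerate([10, 11, 12, 13, 14]):
        rb.record(pc, val)
    print(rb.rollback(from_pc=2))  # [(2, 12), (3, 13), (4, 14)]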
Coordination Motor Skills of Military Pilots Subjected to Survival Training.
Tomczak, Andrzej
2015-09-01
Survival training of military pilots in the Polish Army has gained significance because Polish pilots have taken part in more and more military missions. Prolonged exercise of moderate intensity with restricted sleep or sleep deprivation is known to impair performance. The aim of the study was thus to determine the effects of a strenuous 36-hour exercise with restricted sleep on selected motor coordination and psychomotor indices. Thirteen military pilots aged 30-56 years were examined twice: pretraining and posttraining. The following tests were applied: running motor adjustment (15-m sprint, 3 × 5-m shuttle run, 15-m slalom, and 15-m squat), divided attention, dynamic body balance, and handgrip strength differentiation. Survival training resulted in significant decreases in maximum handgrip strength (from 672 to 630 N), corrected 50% max handgrip (from 427 to 367 N), error 50% max (from 26 to 17%), 15-m sprint (from 5.01 to 4.64 m·s−1), and 15-m squat (from 2.20 to 1.98 m·s−1). Training improved performance in the divided-attention test (from 48.2 to 57.2%). The survival training thus only moderately affected some of the pilots' motor adjustment skills, with divided attention and dynamic body balance remaining unaffected or even improved. Further studies aimed at designing a set of tests for coordination motor skills and for soldiers' capacity to fight for survival under conditions of isolation are needed.
Mixed maximal and explosive strength training in recreational endurance runners.
Taipale, Ritva S; Mikkola, Jussi; Salo, Tiina; Hokka, Laura; Vesterinen, Ville; Kraemer, William J; Nummela, Ari; Häkkinen, Keijo
2014-03-01
Supervised periodized mixed maximal and explosive strength training added to endurance training in recreational endurance runners was examined during an 8-week intervention preceded by an 8-week preparatory strength training period. Thirty-four subjects (21-45 years) were divided into experimental groups: men (M, n = 9), women (W, n = 9), and control groups: men (MC, n = 7), women (WC, n = 9). The experimental groups performed mixed maximal and explosive exercises, whereas control subjects performed circuit training with body weight. Endurance training included running at an intensity below lactate threshold. Strength, power, endurance performance characteristics, and hormones were monitored throughout the study. Significance was set at p ≤ 0.05. Increases that were more systematic than in the control groups were observed in both experimental groups in explosive strength (12 and 13% in men and women, respectively), muscle activation, maximal strength (6 and 13%), and peak running speed (14.9 ± 1.2 to 15.6 ± 1.2 and 12.9 ± 0.9 to 13.5 ± 0.8 km·h−1). The control groups showed significant improvements in maximal and explosive strength, but peak running speed increased only in MC. Submaximal running characteristics (blood lactate and heart rate) improved in all groups. Serum hormones fluctuated significantly in men (testosterone) and in women (thyroid stimulating hormone) but returned to baseline by the end of the study. Mixed strength training combined with endurance training may thus be more effective than circuit training at improving overall fitness in recreational endurance runners, which may be important for other adaptive processes and the larger training loads associated with, e.g., marathon training.
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
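The surface-streamline runback bookkeeping described above reduces to a simple control-volume mass balance, sketched below in Python. The impinging-water amounts and freeze fractions are assumed for illustration; the sketch stands in for, and does not reproduce, the LEWICE heat transfer calculation that actually determines how much water freezes.

    def runback(impinging_water, freeze_fraction):
        # control volumes are ordered from the stagnation zone downstream
        ice, carry = [], 0.0
        for m_imp, f in zip(impinging_water, freeze_fraction):
            m_in = m_imp + carry      # impinging water plus runback from upstream
            frozen = f * m_in         # portion frozen in this control volume
            ice.append(frozen)
            carry = m_in - frozen     # unfrozen water runs back into the next volume
        return ice

    print(runback([1.0, 0.8, 0.5, 0.2], [0.6, 0.5, 0.4, 0.3]))
    # [0.6, 0.6, 0.44, 0.258] units of ice per control volume (illustrative)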
Numerical solutions of 3-dimensional Navier-Stokes equations for closed bluff-bodies
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1985-01-01
The Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer. The codes are written in 32-bit (half-word) FORTRAN, which provides an approximate factor-of-two decrease in computational time and doubles the effective memory size compared to the 64-bit word size.
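For reference, the MacCormack predictor-corrector idea can be shown on the one-dimensional linear advection equation u_t + a u_x = 0. The Python sketch below is a toy illustration with an assumed grid and initial pulse; it is not the vectorized three-dimensional time-split solver described above.

    import numpy as np

    def maccormack_step(u, a, dx, dt):
        up = u.copy()
        # predictor: forward difference in space
        up[:-1] = u[:-1] - a * dt / dx * (u[1:] - u[:-1])
        un = u.copy()
        # corrector: backward difference on the predicted values, then average
        un[1:] = 0.5 * (u[1:] + up[1:] - a * dt / dx * (up[1:] - up[:-1]))
        return un

    x = np.linspace(0.0, 1.0, 101)
    u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse centered at x = 0.3
    for _ in range(50):
        u = maccormack_step(u, a=1.0, dx=x[1] - x[0], dt=0.004)
    print(float(x[np.argmax(u)]))          # pulse has advected to roughly x = 0.5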
Skinny Is Not Enough: A Content Analysis of Fitspiration on Pinterest.
Simpson, Courtney C; Mazzeo, Suzanne E
2017-05-01
Fitspiration is a relatively new social media trend nominally intended to promote health and fitness. Fitspiration messages are presented as encouraging; however, they might also engender body dissatisfaction and compulsive exercise. This study analyzed fitspiration content (n = 1050) on the image-based social media platform Pinterest. Independent raters coded the images and text present in the posts. Messages were categorized as appearance- or health-related, and coded for Social Cognitive Theory constructs: standards, behaviors, and outcome expectancies. Messages encouraged appearance-related body image standards and weight management behaviors more frequently than health-related standards and behaviors, and emphasized attractiveness as motivation to partake in such behaviors. Results also indicated that fitspiration messages include a comparable amount of fit praise (i.e., emphasis on toned/defined muscles) and thin praise (i.e., emphasis on slenderness), suggesting that women are not only supposed to be thin but also fit. Considering the negative outcomes associated with both exposure to idealized body images and exercising for appearance reasons, findings suggest that fitspiration messages are problematic, especially for viewers with high risk of eating disorders and related issues.
Geophysics of Small Planetary Bodies
NASA Technical Reports Server (NTRS)
Asphaug, Erik I.
1998-01-01
As a SETI Institute PI from 1996-1998, Erik Asphaug studied impact and tidal physics and other geophysical processes associated with small (low-gravity) planetary bodies. This work included: a numerical impact simulation linking basaltic achondrite meteorites to asteroid 4 Vesta (Asphaug 1997), which laid the groundwork for an ongoing study of Martian meteorite ejection; cratering and catastrophic evolution of small bodies (with implications for their internal structure; Asphaug et al. 1996); genesis of grooved and degraded terrains in response to impact; maturation of regolith (Asphaug et al. 1997a); and the variation of crater outcome with impact angle, speed, and target structure. Research of impacts into porous, layered and prefractured targets (Asphaug et al. 1997b, 1998a) showed how shape, rheology and structure dramatically affect the sizes and velocities of ejecta, and the survivability and impact-modification of comets and asteroids (Asphaug et al. 1998a). As an affiliate of the Galileo SSI Team, the PI studied problems related to cratering, tectonics, and regolith evolution, including an estimate of the impactor flux around Jupiter and the effect of impact on local and regional tectonics (Asphaug et al. 1998b). Other research included tidal breakup modeling (Asphaug and Benz 1996; Schenk et al. 1996), which is leading to a general understanding of the role of tides in planetesimal evolution. As a Guest Computational Investigator for NASA's BPCC/ESS supercomputer testbed, the PI helped graft SPH3D onto an existing tree code tuned for the massively parallel Cray T3E (Olson and Asphaug, in preparation), obtaining a factor of ~1000 speedup in code execution time (on 512 CPUs). Runs which once took months are now completed in hours.
Influence of short-term unweighing and reloading on running kinetics and muscle activity.
Sainton, Patrick; Nicol, Caroline; Cabri, Jan; Barthelemy-Montfort, Joëlle; Berton, Eric; Chavet, Pascale
2015-05-01
In running, body weight reduction is reported to result in decreased lower limb muscle activity with no change in the global activation pattern (Liebenberg et al. in J Sports Sci 29:207-214). Our study examined the acute effects on running mechanics and lower limb muscle activity of short-term unweighing and reloading conditions while running on a treadmill with a lower body positive pressure (LBPP) device. Eleven healthy males performed two randomized running series of 9 min at preferred speed. Each series included three successive running conditions of 3 min [at 100 % body weight (BW), 60 or 80 % BW, and 100 % BW]. Vertical ground reaction force and center of mass accelerations were analyzed together with surface EMG activity recorded from six major muscles of the left lower limb for the first and last 30 s of each running condition. Effort sensation and mean heart rate were also recorded. In both running series, the unloaded running pattern was characterized by a lower step frequency (due to increased flight time with no change in contact time), lower impact and active force peaks, and also by reduced loading rate and push-off impulse. Amplitude of muscle activity overall decreased, but pre-contact and braking phase extensor muscle activity did not change, whereas it was reduced during the subsequent push-off phase. The combined neuro-mechanical changes suggest that LBPP technology provides runners with an efficient support during the stride. The after-effects recorded after reloading highlight the fact that 3 min of unweighing may be sufficient for updating the running pattern.
Multitasking kernel for the C and Fortran programming languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
Program MAMO: Models for avian management optimization-user guide
Guillaumet, Alban; Paxton, Eben H.
2017-01-01
The following chapters describe the structure and code of MAMO, and walk the reader through running the different components of the program with sample data. This manual should be used alongside a computer running R, so that the reader can copy and paste code into R, observe the output, and follow along interactively. Taken together, chapters 2–4 will allow the user to replicate a simulation study investigating the consequences of climate change and two potential management actions on the population dynamics of a vulnerable and iconic Hawaiian forest bird, the ‘I‘iwi (Drepanis coccinea; hereafter IIWI).
Modified Laser and Thermos cell calculations on microcomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.
1987-01-01
In the course of designing and operating nuclear reactors, many fuel pin cell calculations are required to obtain homogenized cell cross sections as a function of burnup. In the interest of convenience and cost, it would be very desirable to be able to make such calculations on microcomputers. In addition, such a microcomputer code would be very helpful for educational course work in reactor computations. To establish the feasibility of making detailed cell calculations on a microcomputer, a mainframe cell code was compiled and run on a microcomputer. The computer code Laser, originally written in Fortran IV for the IBM-7090 class of mainframe computers, is a cylindrical, one-dimensional, multigroup lattice cell program that includes burnup. It is based on the MUFT code for epithermal and fast group calculations, and Thermos for the thermal calculations. There are 50 fast and epithermal groups and 35 thermal groups. Resonances are calculated assuming a homogeneous system and then corrected for self-shielding, Dancoff, and Doppler by self-shielding factors. The Laser code was converted to run on a microcomputer. In addition, the Thermos portion of Laser was extracted and compiled separately to have available a stand-alone thermal code.
Biomechanics of Distance Running.
ERIC Educational Resources Information Center
Cavanagh, Peter R., Ed.
Contributions from researchers in the field of running mechanics are included in the 13 chapters of this book. The following topics are covered: (1) "The Mechanics of Distance Running: A Historical Perspective" (Peter Cavanagh); (2) "Stride Length in Distance Running: Velocity, Body Dimensions, and Added Mass Effects" (Peter Cavanagh, Rodger…
A Research Code to Study Solutions of the Boundary Layer Equations in Body Conformal Coordinates
1991-05-01
Bergeron, Denis; Defence Research Establishment Suffield, Ralston, Alberta (Report 91-07405, May 1991).
Lima, R A; Pfeiffer, K A; Bugge, A; Møller, N C; Andersen, L B; Stodden, D F
2017-12-01
We investigated the longitudinal associations among physical activity (PA), motor competence (MC), cardiorespiratory fitness (VO2peak), and body fatness across 7 years, and also analyzed the possible mediation effects of PA, MC, and VO2peak on the relationships with body fatness. This was a seven-year longitudinal study with three measuring points (mean ages [in years] and respective sample size: 6.75±0.37, n=696; 9.59±1.07, n=617; 13.35±0.34, n=513). PA (moderate-to-vigorous PA-MVPA and vigorous PA-VPA) was monitored using accelerometers. MC was assessed by the "Körperkoordinationstest für Kinder-KTK" test battery. VO2peak was evaluated using a continuous running protocol until exhaustion. Body fatness was determined by the sum of four skinfolds. Structural equation modeling was performed to evaluate the longitudinal associations among PA, MC, VO2peak, and body fatness and the potential mediation effects of PA, MC, and VO2peak. All coefficients presented were standardized (z-scores). MC and VO2peak directly influenced the development of body fatness, and VO2peak mediated the associations between MVPA, VPA, MC, and body fatness. MC also mediated the associations between MVPA, VPA, and body fatness. In addition, VO2peak had the largest total association with body fatness (β=-0.431; P<.05), followed by MC (β=-0.369; P<.05) and VPA (β=-0.112; P<.05). As PA, MC, and VO2peak exhibited longitudinal association with body fatness, it seems logical that interventions should strive to promote the development of fitness and MC through developmentally appropriate physical activities, as the synergistic interactions of all three variables impacted body fatness. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Johnston, Rich D; Gabbett, Tim J; Jenkins, David G; Speranza, Michael J
2016-04-01
To assess the impact of different repeated-high-intensity-effort (RHIE) bouts on player activity profiles, skill involvements, and neuromuscular fatigue during small-sided games. 22 semiprofessional rugby league players (age 24.0 ± 1.8 y, body mass 95.6 ± 7.4 kg). During 4 testing sessions, they performed RHIE bouts that each differed in the combination of contact and running efforts, followed by a 5-min off-side small-sided game before performing a second bout of RHIE activity and another 5-min small-sided game. Global positioning system microtechnology and video recordings provided information on activity profiles and skill involvements. A countermovement jump and a plyometric push-up assessed changes in lower- and upper-body neuromuscular function after each session. After running-dominant RHIE bouts, players maintained running intensities during both games. In the contact-dominant RHIE bouts, reductions in moderate-speed activity were observed from game 1 to game 2 (ES = -0.71 to -1.06). There was also moderately lower disposal efficiency across both games after contact-dominant RHIE activity compared with running-dominant activity (ES = 0.62-1.02). Greater reductions in lower-body fatigue occurred as RHIE bouts became more running dominant (ES = -0.01 to -1.36), whereas upper-body fatigue increased as RHIE bouts became more contact dominant (ES = -0.07 to -1.55). Physical contact causes reductions in running intensity and the quality of skill involvements during game-based activities. In addition, the neuromuscular fatigue experienced by players is specific to the activities performed.
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the base of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
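To make the coupling idea concrete, the following Python sketch runs a toy leap-frog exchange between a stand-in thermal-hydraulics step and a stand-in neutronics step, with an assumed sub-cycle trigger when the power changes too quickly over a step. The solver models, the 5% threshold, and all names are illustrative assumptions, not the criteria derived in the paper.

def neutronics_step(power, fuel_temp, dt):
    # Toy point-kinetics-like response: power relaxes toward a level set by
    # Doppler feedback from the fuel temperature (all constants invented).
    target = 1.0 - 1.0e-3 * (fuel_temp - 900.0)
    return power + (target - power) * min(1.0, dt / 0.05)

def thermal_hydraulics_step(fuel_temp, power, dt):
    # Toy lumped fuel heat balance: heating by fission power, cooling to coolant.
    return fuel_temp + dt * (150.0 * power - 0.15 * (fuel_temp - 550.0))

def coupled_run(t_end=10.0, dt_th=0.5, jump_at=5.0):
    t, power, fuel_temp = 0.0, 1.0, 900.0
    while t < t_end:
        power_prev, temp_prev = power, fuel_temp
        # "Leap frog": thermal-hydraulics marches a full step with the last
        # known power, then neutronics catches up on the new temperature.
        fuel_temp = thermal_hydraulics_step(temp_prev, power_prev, dt_th)
        power = neutronics_step(power_prev, fuel_temp, dt_th)
        # Assumed sub-cycle criterion: if the power changed too much over the
        # step, redo it as several tightly coupled sub-steps.
        if abs(power - power_prev) / max(abs(power_prev), 1e-12) > 0.05:
            n_sub, power, fuel_temp = 10, power_prev, temp_prev
            for _ in range(n_sub):
                fuel_temp = thermal_hydraulics_step(fuel_temp, power, dt_th / n_sub)
                power = neutronics_step(power, fuel_temp, dt_th / n_sub)
        t += dt_th
        if abs(t - jump_at) < 0.5 * dt_th:
            fuel_temp += 50.0  # crude stand-in for a sudden reactivity-driven transient
        print(f"t={t:5.2f} s  power={power:6.3f}  T_fuel={fuel_temp:7.1f} K")

if __name__ == "__main__":
    coupled_run()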
Henneberg, M.F.; Strause, J.L.
2002-01-01
This report presents the instructions required to use the Scour Critical Bridge Indicator (SCBI) Code and Scour Assessment Rating (SAR) calculator developed by the Pennsylvania Department of Transportation (PennDOT) and the U.S. Geological Survey to identify Pennsylvania bridges with excessive scour conditions or a high potential for scour. Use of the calculator will enable PennDOT bridge personnel to quickly calculate these scour indices if site conditions change, new bridges are constructed, or new information needs to be included. Both indices are calculated for a bridge simultaneously because they must be used together to be interpreted accurately. The SCBI Code and SAR calculator program is run by a World Wide Web browser from a remote computer. The user can 1) add additional scenarios for bridges in the SCBI Code and SAR calculator database or 2) enter data for new bridges and run the program to calculate the SCBI Code and calculate the SAR. The calculator program allows the user to print the results and to save multiple scenarios for a bridge.
A Secure and Robust Approach to Software Tamper Resistance
NASA Astrophysics Data System (ADS)
Ghosh, Sudeep; Hiser, Jason D.; Davidson, Jack W.
Software tamper-resistance mechanisms have increasingly assumed significance as a technique to prevent unintended uses of software. Closely related to anti-tampering techniques are obfuscation techniques, which make code difficult to understand or analyze and, therefore, challenging to modify meaningfully. This paper describes a secure and robust approach to software tamper resistance and obfuscation using process-level virtualization. The proposed techniques involve novel uses of software checksumming guards and encryption to protect an application. In particular, a virtual machine (VM) is assembled with the application at software build time such that the application cannot run without the VM. The VM provides just-in-time decryption of the program and dynamism for the application's code. The application's code is used to protect the VM to ensure a level of circular protection. Finally, to prevent the attacker from obtaining an analyzable snapshot of the code, the VM periodically discards all decrypted code. We describe a prototype implementation of these techniques and evaluate the run-time performance of applications using our system. We also discuss how our system provides stronger protection against tampering attacks than previously described tamper-resistance approaches.
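As an illustration of the guard idea only, the short Python sketch below hashes a function's bytecode and refuses to call it if the checksum no longer matches the value recorded at "build" time. The real system operates on native code inside a process-level VM with encryption; the function and variable names here are hypothetical.

import hashlib

def checksum(func):
    # Hash of the function's compiled bytecode.
    return hashlib.sha256(func.__code__.co_code).hexdigest()

def make_guard(protected_func, expected_digest):
    def guarded(*args, **kwargs):
        if checksum(protected_func) != expected_digest:
            raise RuntimeError("tamper check failed")
        return protected_func(*args, **kwargs)
    return guarded

def compute_price(quantity):
    return 9.99 * quantity

# Record the "build-time" checksum, then only ever call the guarded wrapper.
expected = checksum(compute_price)
compute_price_guarded = make_guard(compute_price, expected)
print(compute_price_guarded(3))  # runs normally while the bytecode is intact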
The seasonal-cycle climate model
NASA Technical Reports Server (NTRS)
Marx, L.; Randall, D. A.
1981-01-01
The seasonal cycle run, which will become the control run for the comparison with runs utilizing codes and parameterizations developed by outside investigators, is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high-speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized. The 400-day seasonal cycle run serves as a control run for both medium- and long-range climate forecasts as well as sensitivity studies.
Chen, Wei-Han; Wu, Huey-June; Lo, Shin-Liang; Chen, Hui; Yang, Wen-Wen; Huang, Chen-Fu; Liu, Chiang
2018-05-28
Chen, WH, Wu, HJ, Lo, SL, Chen, H, Yang, WW, Huang, CF, and Liu, C. Eight-week battle rope training improves multiple physical fitness dimensions and shooting accuracy in collegiate basketball players. J Strength Cond Res XX(X): 000-000, 2018-Basketball players must possess optimally developed physical fitness in multiple dimensions and shooting accuracy. This study investigated whether (battle rope [BR]) training enhances multiple physical fitness dimensions, including aerobic capacity (AC), upper-body anaerobic power (AnP), upper-body and lower-body power, agility, and core muscle endurance, and shooting accuracy in basketball players and compared its effects with those of regular training (shuttle run [SR]). Thirty male collegiate basketball players were randomly assigned to the BR or SR groups (n = 15 per group). Both groups received 8-week interval training for 3 sessions per week; the protocol consisted of the same number of sets, exercise time, and rest interval time. The BR group exhibited significant improvements in AC (Progressive Aerobic Cardiovascular Endurance Run laps: 17.6%), upper-body AnP (mean power: 7.3%), upper-body power (basketball chest pass speed: 4.8%), lower-body power (jump height: 2.6%), core muscle endurance (flexion: 37.0%, extension: 22.8%, and right side bridge: 23.0%), and shooting accuracy (free throw: 14.0% and dynamic shooting: 36.2%). However, the SR group exhibited improvements in only AC (12.0%) and upper-body power (3.8%) (p < 0.05). The BR group demonstrated larger pre-post improvements in upper-body AnP (fatigue index) and dynamic shooting accuracy than the SR group did (p < 0.05). The BR group showed higher post-training performance in upper-body AnP (mean power and fatigue index) than the SR group did (p < 0.05). Thus, BR training effectively improves multiple physical fitness dimensions and shooting accuracy in collegiate basketball players.
Rhexifolia versus Rhexiifolia: Plant Nomenclature Run Amok?
R. Kasten Dumroese; Mark W. Skinner
2005-01-01
The International Botanical Congress governs plant nomenclature worldwide through the International Code of Botanical Nomenclature. In the current code are very specific procedures for naming plants with novel compound epithets, and correcting compound epithets, like rhexifolia, that were incorrectly combined. We discuss why rhexiifolia...
Measurer’s Handbook: U.S. Army Anthropometric Survey, 1987-1988
1988-05-04
... will form the basis for ensuring that Army clothing, equipment, and systems properly accommodate Army personnel who run the body-size gamut from small women to large men. ... interesting men and women whose jobs in the Army run the gamut from armorers to pediatricians. Many will be interested in you and your job. ...
NASA Astrophysics Data System (ADS)
Plummer, M.; Armour, E. A. G.; Todd, A. C.; Franklin, C. P.; Cooper, J. N.
2009-12-01
We present a program used to calculate intricate three-particle integrals for variational calculations of solutions to the leptonic Schrödinger equation with two nuclear centres in which inter-leptonic distances (electron-electron and positron-electron) are included directly in the trial functions. The program has been used so far in calculations of He-H¯ interactions and positron-H2 scattering; however, the precisely defined integrals are applicable to other situations. We include a summary discussion of how the program has been optimized from a 'legacy'-type code to a more modern high-performance code with a performance improvement factor of up to 1000. Program summary Program title: tripleint.cc Catalogue identifier: AEEV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 829 No. of bytes in distributed program, including test data, etc.: 91 798 Distribution format: tar.gz Programming language: Fortran 95 (fixed format) Computer: Modern PC (tested on AMD processor) [1], IBM Power5 [2], Cray XT4 [3], similar Operating system: Red Hat Linux [1], IBM AIX [2], UNICOS [3] Has the code been vectorized or parallelized?: Serial (multi-core shared memory may be needed for some large jobs) RAM: Dependent on parameter sizes and option to use intermediate I/O. Estimates for practical use: 0.5-2 GBytes (with intermediate I/O); 1-4 GBytes (all-memory: the preferred option). Classification: 2.4, 2.6, 2.7, 2.9, 16.5, 16.10, 20 Nature of problem: The 'tripleint.cc' code evaluates three-particle integrals needed in certain variational (in particular: Rayleigh-Ritz and generalized-Kohn) matrix elements for solution of the Schrödinger equation with two fixed centres (the solutions may then be used in subsequent dynamic nuclear calculations). Specifically the integrals are defined by Eq. (16) in the main text and contain terms proportional to r_ij × r_ik / r_jk, i≠j, i≠k, j≠k, with r_ij the distance between leptons i and j. The article also briefly describes the performance optimizations used to increase the speed of evaluation of the integrals enough to allow detailed testing and mapping of the effect of varying non-linear parameters in the variational trial functions. Solution method: Each integral is solved using prolate spheroidal coordinates and series expansions (with cut-offs) of the many-lepton expressions. 1-d integrals and sub-integrals are solved analytically by various means (the program automatically chooses the most accurate of the available methods for each set of parameters and function arguments), while two of the three integrations over the prolate spheroidal coordinates 'λ' are carried out numerically. Many similar integrals with identical non-linear variational parameters may be calculated with one call of the code. Restrictions: There are limits to the number of points for the numerical integrations, to the cut-off variable itaumax for the many-lepton series expansions, and to the maximum powers of Slater-like input functions. For runs near the limit of the cut-off variable and with certain small-magnitude values of variational non-linear parameters, the code can require large amounts of memory (an option using some intermediate I/O is included to offset this).
Unusual features: In addition to the program, we also present a summary description of the techniques and ideology used to optimize the code, together with accuracy tests and indications of performance improvement. Running time: The test runs take 1-15 minutes on HPCx [2] as indicated in Section 5 of the main text. A practical run with 729 integrals, 40 quadrature points per dimension and itaumax = 8 took 150 minutes on a PC (e.g., [1]); a similar run with 'medium' accuracy, e.g. for parameter optimization (see Section 2 of the main text), with 30 points per dimension and itaumax = 6 took 35 minutes. References: [1] PC: Memory: 2.72 GB, CPU: AMD Opteron 246 dual-core, 2×2 GHz, OS: GNU/Linux, kernel: Linux 2.6.9-34.0.2.ELsmp. [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/ (visited May 2009). [3] HECToR, CRAY XT4 running UNICOS/lc, http://www.hector.ac.uk/ (visited May 2009).
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
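A minimal sketch of the kind of timing check this argument rests on: schedule events every 10 ms against a high-resolution clock in a high-level language (Python here, not the authors' environment) and report the jitter of the achieved intervals. The interval, event count, and busy-wait strategy are illustrative assumptions.

import statistics
import time

def measure_jitter(n_events=200, interval_s=0.010):
    deadline = time.perf_counter()
    stamps = []
    for _ in range(n_events):
        deadline += interval_s
        while time.perf_counter() < deadline:  # busy-wait; sleep() is too coarse at 1 ms
            pass
        stamps.append(time.perf_counter())
    intervals = [b - a for a, b in zip(stamps, stamps[1:])]
    errors_ms = [(iv - interval_s) * 1e3 for iv in intervals]
    print(f"mean error {statistics.mean(errors_ms):.3f} ms, "
          f"sd {statistics.stdev(errors_ms):.3f} ms, "
          f"max |error| {max(abs(e) for e in errors_ms):.3f} ms")

if __name__ == "__main__":
    measure_jitter()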
MIP- MULTIMISSION INTERACTIVE PICTURE PLANNING PROGRAM
NASA Technical Reports Server (NTRS)
Callahan, J. D.
1994-01-01
The Multimission Interactive Picture Planner, MIP, is a scientifically accurate and fast, 3D animation program for deep space. MIP is also versatile, reasonably comprehensive, portable, and will run on microcomputers. New techniques were developed to rapidly perform the calculations and transformations necessary to animate scientifically accurate 3D space. At the same time, portability is maintained, as the transformations and clipping have been written in FORTRAN 77 code. MIP was primarily designed to handle Voyager, Galileo, and the Space Telescope. It can, however, be adapted to handle other missions. The space simulation consists of a rotating body (usually a planet), any natural satellites, a spacecraft, the sun, stars, descriptive labelling, and field of view boxes. The central body and natural satellites are tri-axial wireframe representations with terminators, limbs, and landmarks. Hidden lines are removed for the central body and natural satellites, but not for the scene as a whole so that bodies may be seen behind one another. The program has considerable flexibility in its step time, observer position, viewed object, field of view, etc. Most parameters may be changed from the keyboard while the simulation is running. When MIP is executed it will ask the user for a control file, which should be prepared before execution. The control file identifies which mission MIP should simulate, the star catalog files, the ephemerides files to be used, the central body, planets, asteroids, and comets, and solar system landmarks and constants such as planets, asteroids, and comets. The control file also describes the fields of view. Control files are included to simulate the Voyager 1 encounter at Jupiter and the Giotto spacecraft's flyby of Halley's comet. Data is included for Voyager 1 and 2 (all 6 planetary encounters) and Giotto. MIP was written for an IBM PC or compatibles. It requires 512K of RAM, a CGA or compatible graphics adapter, and DOS 2.0 or higher. Users must supply their own graphics primitives to clear the screen, change the color, and connect 2D points with straight lines. Also, the users must tie in the graphics primitives along with their ephemeris readers. (MIP does everything else including clipping.) MIP was developed in 1988.
Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph
2014-01-01
A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and its capability as a continuous control interface for augmentation of video applications. One 35 year old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented that allowed to suppress false positive selections and this allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
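For orientation, the sketch below classifies a signal buffer by correlating it against per-target code-VEP templates and picking the best match, on synthetic data. The paper used a linear classifier trained on ECoG calibration runs; the correlation rule, sampling rate, and synthetic templates here are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
fs, window_s, n_targets = 1000, 3.15, 4          # sampling rate and buffer length (assumed)
n_samples = int(fs * window_s)

# Synthetic per-target templates, standing in for averaged calibration epochs.
templates = rng.standard_normal((n_targets, n_samples))

def classify(buffer, templates):
    # Pearson correlation between the current buffer and each target template.
    scores = [np.corrcoef(buffer, t)[0, 1] for t in templates]
    return int(np.argmax(scores)), scores

# Simulate a trial in which target 2 is attended (its template plus noise).
trial = templates[2] + 0.8 * rng.standard_normal(n_samples)
predicted, scores = classify(trial, templates)
print("predicted target:", predicted, "scores:", np.round(scores, 3))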
A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation
NASA Astrophysics Data System (ADS)
Cho, Min Hyung; Cai, Wei
2010-12-01
A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of 2D complex Helmholtz equation in a fast manner for a wide range of complex wave number k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes the description of theoretical backgrounds, the FMM algorithm, software structures, and some test runs. Program summaryProgram title: 2D-WFMM Catalogue identifier: AEHI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4636 No. of bytes in distributed program, including test data, etc.: 82 582 Distribution format: tar.gz Programming language: C Computer: Any Operating system: Any operating system with gcc version 4.2 or newer Has the code been vectorized or parallelized?: Multi-core processors with shared memory RAM: Depending on the number of particles N and the wave number k Classification: 4.8, 4.12 External routines: OpenMP ( http://openmp.org/wp/) Nature of problem: Evaluate interaction between N particles governed by the fundamental solution of 2D Helmholtz equation with complex k. Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with cutoff level which combines low frequency method and high frequency method. Running time: Depending on the number of particles N, wave number k, and number of cores in CPU. CPU time increases as N log N.
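For reference, the brute-force O(N^2) sum that the FMM accelerates can be written directly from the 2D Helmholtz fundamental solution (i/4)H0^(1)(k|x-y|). The Python sketch below is only this naive reference evaluation with an assumed random point distribution; it is not part of the distributed C code.

import numpy as np
from scipy.special import hankel1

def direct_helmholtz(points, charges, k):
    # u_i = sum over j != i of q_j * (i/4) * H0^(1)(k |x_i - x_j|), the naive O(N^2) sum.
    n = len(points)
    u = np.zeros(n, dtype=complex)
    for i in range(n):
        r = np.linalg.norm(points - points[i], axis=1)
        mask = np.arange(n) != i            # exclude the self-interaction
        u[i] = np.sum(charges[mask] * 0.25j * hankel1(0, k * r[mask]))
    return u

rng = np.random.default_rng(1)
pts = rng.random((500, 2))                  # assumed source locations in the unit box
q = rng.standard_normal(500) + 0j
print(direct_helmholtz(pts, q, k=20.0 + 2.0j)[:3])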
Creation of a Rapid High-Fidelity Aerodynamics Module for a Multidisciplinary Design Environment
NASA Technical Reports Server (NTRS)
Srinivasan, Muktha; Whittecar, William; Edwards, Stephen; Mavris, Dimitri N.
2012-01-01
In the traditional aerospace vehicle design process, each successive design phase is accompanied by an increment in the modeling fidelity of the disciplinary analyses being performed. This trend follows a corresponding shrinking of the design space as more and more design decisions are locked in. The correlated increase in knowledge about the design and decrease in design freedom occurs partly because increases in modeling fidelity are usually accompanied by significant increases in the computational expense of performing the analyses. When running high fidelity analyses, it is not usually feasible to explore a large number of variations, and so design space exploration is reserved for conceptual design, and higher fidelity analyses are run only once a specific point design has been selected to carry forward. The designs produced by this traditional process have been recognized as being limited by the uncertainty that is present early on due to the use of lower fidelity analyses. For example, uncertainty in aerodynamics predictions produces uncertainty in trajectory optimization, which can impact overall vehicle sizing. This effect can become more significant when trajectories are being shaped by active constraints. For example, if an optimal trajectory is running up against a normal load factor constraint, inaccuracies in the aerodynamic coefficient predictions can cause a feasible trajectory to be considered infeasible, or vice versa. For this reason, a trade must always be performed between the desired fidelity and the resources available. Apart from this trade between fidelity and computational expense, it is very desirable to use higher fidelity analyses earlier in the design process. A large body of work has been performed to this end, led by efforts in the area of surrogate modeling. In surrogate modeling, an up-front investment is made by running a high fidelity code over a Design of Experiments (DOE); once completed, the DOE data is used to create a surrogate model, which captures the relationships between input variables and responses into regression equations. Depending on the dimensionality of the problem and the fidelity of the code for which a surrogate model is being created, the initial DOE can itself be computationally prohibitive to run. Cokriging, a modeling approach from the field of geostatistics, provides a desirable compromise between computational expense and fidelity. To do this, cokriging leverages a large body of data generated by a low fidelity analysis, combines it with a smaller set of data from a higher fidelity analysis, and creates a kriging surrogate model with prediction fidelity approaching that of the higher fidelity analysis. When integrated into a multidisciplinary environment, a disciplinary analysis module employing cokriging can raise the analysis fidelity without drastically impacting the expense of design iterations. This is demonstrated through the creation of an aerodynamics analysis module in NASA s OpenMDAO framework. Aerodynamic analyses including Missile DATCOM, APAS, and USM3D are leveraged to create high fidelity aerodynamics decks for parametric vehicle geometries, which are created in NASA s Vehicle Sketch Pad (VSP). Several trade studies are performed to examine the achieved level of model fidelity, and the overall impact to vehicle design is quantified.
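The multi-fidelity idea can be illustrated with a deliberately simplified additive-correction surrogate: fit one model to a large low-fidelity DOE and a second model to the high-minus-low discrepancy at the few high-fidelity points. The sketch below uses toy "aerodynamic" functions and scikit-learn Gaussian processes as stand-ins; the actual module used cokriging with Missile DATCOM, APAS, and USM3D data.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def lofi(x):   # cheap, biased analysis (stand-in for an engineering-level code)
    return np.sin(8 * x) + 0.3 * x

def hifi(x):   # expensive, accurate analysis (stand-in for CFD)
    return np.sin(8 * x) + 0.3 * x + 0.2 * np.cos(20 * x)

x_lo = np.linspace(0.0, 1.0, 40)[:, None]   # large low-fidelity DOE
x_hi = np.linspace(0.0, 1.0, 6)[:, None]    # small high-fidelity DOE

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, lofi(x_lo.ravel()))
discrepancy = hifi(x_hi.ravel()) - gp_lo.predict(x_hi)      # hi minus lo at hi-fi points
gp_delta = GaussianProcessRegressor(RBF(0.2)).fit(x_hi, discrepancy)

x_test = np.linspace(0.0, 1.0, 5)[:, None]
prediction = gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.c_[x_test.ravel(), prediction, hifi(x_test.ravel())])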
Effects of Combined Stellar Feedback on Star Formation in Stellar Clusters
NASA Astrophysics Data System (ADS)
Wall, Joshua Edward; McMillan, Stephen; Pellegrino, Andrew; Mac Low, Mordecai; Klessen, Ralf; Portegies Zwart, Simon
2018-01-01
We present results of hybrid MHD+N-body simulations of star cluster formation and evolution including self-consistent feedback from the stars in the form of radiation, winds, and supernovae from all stars more massive than 7 solar masses. The MHD is modeled with the adaptive mesh refinement code FLASH, while the N-body computations are done with a direct algorithm. Radiation is modeled using ray tracing along long characteristics in directions distributed using the HEALPIX algorithm, and causes ionization and momentum deposition, while winds and supernovae conserve momentum and energy during injection. Stellar evolution is followed using power-law fits to evolution models in SeBa. We use a gravity bridge within the AMUSE framework to couple the N-body dynamics of the stars to the gas dynamics in FLASH. Feedback from the massive stars alters the structure of young clusters as gas ejection occurs. We diagnose this behavior by distinguishing between a fractal distribution and central clustering using a Q parameter computed from the minimum spanning tree of each model cluster. Global effects of feedback in our simulations will also be discussed.
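The Q diagnostic mentioned above compares the normalized mean edge length of the stars' minimum spanning tree with their normalized mean pairwise separation (Cartwright & Whitworth 2004); low Q indicates substructure, higher Q indicates central concentration. The Python sketch below follows the common 2D convention on synthetic point sets and may differ in detail from the normalization used in the simulations.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def q_parameter(xy):
    n = len(xy)
    dist = squareform(pdist(xy))                        # all pairwise separations
    mst = minimum_spanning_tree(dist).toarray()
    mean_edge = mst[mst > 0].mean()
    r_cluster = np.max(np.linalg.norm(xy - xy.mean(axis=0), axis=1))
    area = np.pi * r_cluster ** 2
    m_bar = mean_edge / (np.sqrt(n * area) / (n - 1))   # normalized mean MST edge
    s_bar = pdist(xy).mean() / r_cluster                # normalized mean separation
    return m_bar / s_bar

rng = np.random.default_rng(2)
# Clumpy (substructured) set: tight groups around a few random centres.
centres = rng.uniform(-1, 1, size=(8, 2))
clumpy = np.concatenate([c + 0.03 * rng.standard_normal((40, 2)) for c in centres])
# Centrally concentrated set.
concentrated = 0.4 * rng.standard_normal((320, 2))
print("Q (clumpy):      ", round(q_parameter(clumpy), 2))
print("Q (concentrated):", round(q_parameter(concentrated), 2))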
Relativistic N-body simulations with massive neutrinos
NASA Astrophysics Data System (ADS)
Adamek, Julian; Durrer, Ruth; Kunz, Martin
2017-11-01
Some of the dark matter in the Universe is made up of massive neutrinos. Their impact on the formation of large scale structure can be used to determine their absolute mass scale from cosmology, but to this end accurate numerical simulations have to be developed. Due to their relativistic nature, neutrinos pose additional challenges when one tries to include them in N-body simulations that are traditionally based on Newtonian physics. Here we present the first numerical study of massive neutrinos that uses a fully relativistic approach. Our N-body code, gevolution, is based on a weak-field formulation of general relativity that naturally provides a self-consistent framework for relativistic particle species. This allows us to model neutrinos from first principles, without invoking any ad-hoc recipes. Our simulation suite comprises some of the largest neutrino simulations performed to date. We study the effect of massive neutrinos on the nonlinear power spectra and the halo mass function, focusing on the interesting mass range between 0.06 eV and 0.3 eV and including a case for an inverted mass hierarchy.
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summaryProgram title:CADNA Catalogue identifier:AEAT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html Program obtainable from:CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:53 420 No. of bytes in distributed program, including test data, etc.:566 495 Distribution format:tar.gz Programming language:Fortran Computer:PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system:LINUX, UNIX Classification:4.14, 6.5, 20 Nature of problem:A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method:The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions:CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time:The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References:The CADNA library, URL address: http://www.lip6.fr/cadna. J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation á diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
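The idea behind Discrete Stochastic Arithmetic can be illustrated without the library: run the same computation a few times while randomly perturbing intermediate results at the last-bit level, then count how many significant digits the samples share. The Python sketch below is only that illustration; it does not reproduce the CADNA types or API, and the per-statement perturbation is a crude stand-in for CADNA's random rounding mode.

import math
import random

def perturb(x):
    # Randomly nudge x by roughly one unit in the last place, a crude stand-in
    # for CADNA's random choice of rounding direction at each operation.
    return x * (1.0 + random.choice((-1.0, 1.0)) * 2.0 ** -52)

def significant_digits(samples):
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    if spread == 0.0:
        return 15.0                      # samples indistinguishable at double precision
    return max(0.0, math.log10(abs(mean) / spread))

def cancellation(x=3.14159, big=1.0e17):
    # Catastrophic cancellation: x is added to a huge number in small pieces and
    # the huge number is subtracted again; almost every digit of x is lost.
    s = big
    for _ in range(20):
        s = perturb(s + x / 20.0)
    return perturb(s - big)

def benign(x=2.0):
    return perturb(math.sqrt(x))

for name, fn in (("cancellation", cancellation), ("sqrt", benign)):
    runs = [fn() for _ in range(3)]
    print(f"{name:12s} samples={runs}  ~{significant_digits(runs):.1f} exact digits")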
Running Pace Decrease during a Marathon Is Positively Related to Blood Markers of Muscle Damage
Del Coso, Juan; Fernández, David; Abián-Vicen, Javier; Salinero, Juan José; González-Millán, Cristina; Areces, Francisco; Ruiz, Diana; Gallo, César; Calleja-González, Julio; Pérez-González, Benito
2013-01-01
Background Completing a marathon is one of the most challenging sports activities, yet the source of running fatigue during this event is not completely understood. The aim of this investigation was to determine the cause(s) of running fatigue during a marathon in warm weather. Methodology/Principal Findings We recruited 40 amateur runners (34 men and 6 women) for the study. Before the race, body core temperature, body mass, leg muscle power output during a countermovement jump, and blood samples were obtained. During the marathon (27 °C; 27% relative humidity) running fatigue was measured as the pace reduction from the first 5-km to the end of the race. Within 3 min after the marathon, the same pre-exercise variables were obtained. Results Marathoners reduced their running pace from 3.5 ± 0.4 m/s after 5-km to 2.9 ± 0.6 m/s at the end of the race (P<0.05), although the running fatigue experienced by the marathoners was uneven. Marathoners with greater running fatigue (> 15% pace reduction) had elevated post-race myoglobin (1318 ± 1411 v 623 ± 391 µg L−1; P<0.05), lactate dehydrogenase (687 ± 151 v 583 ± 117 U L−1; P<0.05), and creatine kinase (564 ± 469 v 363 ± 158 U L−1; P = 0.07) in comparison with marathoners that preserved their running pace reasonably well throughout the race. However, they did not differ in their body mass change (−3.1 ± 1.0 v −3.0 ± 1.0%; P = 0.60) or post-race body temperature (38.7 ± 0.7 v 38.9 ± 0.9 °C; P = 0.35). Conclusions/Significance Running pace decline during a marathon was positively related with muscle breakdown blood markers. To elucidate if muscle damage during a marathon is related to mechanistic or metabolic factors requires further investigation. PMID:23460881
Anthropometrics and Body Composition in East African Runners: Potential Impact on Performance.
Mooses, Martin; Hackney, Anthony C
2017-04-01
Maximal oxygen uptake (V̇O2max), fractional utilization of V̇O2max during running, and running economy (RE) are crucial factors for running success for all endurance athletes. Although evidence is limited, investigations of these key factors indicate that East Africans' superiority in distance running is largely due to a unique combination of these factors. East African runners appear to have a very high level of RE most likely associated, at least partly, with anthropometric characteristics rather than with any specific metabolic property of the working muscle. That is, evidence suggests that anthropometrics and body composition might be important determinants of the superior performance of East African distance runners. Regrettably, this role is often overlooked and mentioned as a descriptive parameter rather than an explanatory parameter in many research studies. This brief review article provides an overview of the evidence to support the critical role anthropometrics and body composition have on the distance running success of East African athletes. The structural form and shape of these athletes also have a downside, because having very low BMI or body fat increases the risk for relative energy deficiency in sport (RED-S) conditions in both male and female runners, which can have serious health consequences.
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the users. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs the preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma
NASA Astrophysics Data System (ADS)
Kawazura, Y.; Barnes, M.
2018-05-01
This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al. (2009) [9]). This model captures ion kinetic effects that are important near the ion gyro-radius scale while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry (Numata et al. (2010) [41]). The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ∼ 2√(m_i/m_e) ∼ 100 times faster than AstroGK with a single ion species and kinetic electrons, where m_i/m_e is the ion-electron mass ratio. The improvement of the computational time makes it feasible to execute ion scale gyrokinetic simulations with a high velocity space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as electron-ion temperature ratio and plasma beta.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are contributing to make such an ambitious project, of including a state-of-the-art flow analysis code into an optimisation loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
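The principle behind the sensitivity code can be shown with a tiny forward-mode automatic-differentiation sketch based on dual numbers: every value carries its derivative with respect to a design variable through the computation. The cited work used source-transformation AD tools on the Fortran flow solver; the Python class and toy response below are purely illustrative.

import math

class Dual:
    # A value together with its derivative with respect to one design variable.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def response(t):
    # Toy "analysis": a response that depends nonlinearly on a design variable t.
    return 3.0 * t * t + sin(t)

t = Dual(0.7, 1.0)           # seed derivative dt/dt = 1
out = response(t)
print("value:", out.val, "d(response)/dt:", out.der)   # derivative computed alongside the value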
Endocannabinoids Measurement in Human Saliva as Potential Biomarker of Obesity
Tabarin, Antoine; Clark, Samantha; Leste-Lasserre, Thierry; Marsicano, Giovanni; Piazza, Pier Vincenzo; Cota, Daniela
2012-01-01
Background The discovery of the endocannabinoid system and of its role in the regulation of energy balance has significantly advanced our understanding of the physiopathological mechanisms leading to obesity and type 2 diabetes. New knowledge on the role of this system in humans has been acquired by measuring blood endocannabinoids. Here we explored endocannabinoids and related N-acylethanolamines in saliva and verified their changes in relation to body weight status and in response to a meal or to body weight loss. Methodology/Principal Findings Fasting plasma and salivary endocannabinoids and N-acylethanolamines were measured through liquid mass spectrometry in 12 normal weight and 12 obese, insulin-resistant subjects. Salivary endocannabinoids and N-acylethanolamines were evaluated in the same cohort before and after the consumption of a meal. Changes in salivary endocannabinoids and N-acylethanolamines after body weight loss were investigated in a second group of 12 obese subjects following a 12-weeks lifestyle intervention program. The levels of mRNAs coding for enzymes regulating the metabolism of endocannabinoids, N-acylethanolamines and of cannabinoid type 1 (CB1) receptor, alongside endocannabinoids and N-acylethanolamines content, were assessed in human salivary glands. The endocannabinoids 2-arachidonoylglycerol (2-AG), N-arachidonoylethanolamide (anandamide, AEA), and the N-acylethanolamines (oleoylethanolamide, OEA and palmitoylethanolamide, PEA) were quantifiable in saliva and their levels were significantly higher in obese than in normal weight subjects. Fasting salivary AEA and OEA directly correlated with BMI, waist circumference and fasting insulin. Salivary endocannabinoids and N-acylethanolamines did not change in response to a meal. CB1 receptors, ligands and enzymes were expressed in the salivary glands. Finally, a body weight loss of 5.3% obtained after a 12-weeks lifestyle program significantly decreased salivary AEA levels. Conclusions/Significance Endocannabinoids and N-acylethanolamines are quantifiable in saliva and their levels correlate with obesity but not with feeding status. Body weight loss significantly decreases salivary AEA, which might represent a useful biomarker in obesity. PMID:22860123
Scapular insertion of the rabbit latissimus dorsi muscle: gross anatomy and fibre-type composition.
Barron, D J; Etherington, P J; Winlove, C P; Pepper, J R
2001-01-01
This paper defines the characteristics and significance of the scapular insertion of the latissimus dorsi muscle (LDM) of the rabbit. In a study of the New Zealand White species (n = 10) the scapular insertion was found to be a consistent anatomical feature of the LDM that made up 12.3% (+/-2.3) of the total muscle weight. The fibres arise from the medial aspect of the body of the LDM and run in a caudocranial direction to be inserted into a broad, thin tendon beneath the scapula ridge. This is morphologically different from the scapular component of the human LDM which is a well-recognized but inconsistent feature and consists of no more than a small leash of fibres running around the lower pole of the scapula. The scapular insertion was deeper red in colour than the body of the muscle and fibre-typing demonstrated a mean slow-fibre composition of 49% (+/-2.6) compared to 16% (+/-1.7) for the body of the muscle (p < 0.01). Mapping of the fibre types throughout the remainder of the LDM confirmed that the body of the muscle was of fast phenotype but with significantly more slow fibres in the superomedial segment of the muscle than elsewhere. This region of the muscle contributes mainly to the scapular insertion and it is proposed that this part of the muscle takes on a predominantly postural role in stabilising the scapula during movement of the forelimb.
Knowledge Data Base for Amorphous Metals
2007-07-26
... not programmatic, updates. Over 100 custom SQL statements that maintain the domain-specific data are attached to the workflow entries in a generic ... for the form by populating the SQL and run generation tables. Application data may be prepared in different ways for two steps that invoke the same form ... (run generation mode). There is a single table of SQL commands. Each record has a user-definable ID, the SQL code, and a comment. The run generation ...
TABULATED EQUIVALENT SDR FLAMELET (TESF) MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
KUNDU, PRITHWISH; AMEEN, mUHSIN MOHAMMED; UNNIKRISHNAN, UMESH
The code consists of an implementation of a novel tabulated combustion model for non-premixed flames in CFD solvers. This novel technique/model is used to implement an unsteady flamelet tabulation without using progress variables for non-premixed flames. It also has the capability to include history effects, which is unique within tabulated flamelet models. The flamelet table generation code can be run in parallel to generate tables with large chemistry mechanisms in relatively short wall-clock times. The combustion model/code reads these tables. This framework can be coupled with any CFD solver with RANS as well as LES turbulence models. This framework enables CFD solvers to run large chemistry mechanisms with a large number of grid cells at relatively lower computational costs. Currently it has been coupled with the Converge CFD code and validated against available experimental data. This model can be used to simulate non-premixed combustion in a variety of applications like reciprocating engines, gas turbines and industrial burners operating over a wide range of fuels.
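To show only the lookup step a CFD solver would perform, the sketch below builds a small placeholder table indexed by mixture fraction and scalar dissipation rate and interpolates it at cell states. The table axes, the made-up temperature field, and the absence of the model's history-effect treatment are all assumptions; the real TESF tables are generated by the parallel chemistry code described above.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

Z = np.linspace(0.0, 1.0, 51)              # mixture fraction axis (assumed)
chi = np.logspace(-1, 3, 25)               # scalar dissipation rate axis, 1/s (assumed)

# Placeholder "table": peak temperature near stoichiometric Z, quenching at high chi.
Zg, Cg = np.meshgrid(Z, chi, indexing="ij")
T_table = 300.0 + 1900.0 * np.exp(-((Zg - 0.06) / 0.08) ** 2) / (1.0 + Cg / 200.0)

lookup_T = RegularGridInterpolator((Z, np.log10(chi)), T_table)

# Query as a CFD cell would, using its local (Z, chi) state.
cells = np.array([[0.05, np.log10(30.0)],
                  [0.20, np.log10(500.0)]])
print(lookup_T(cells))                     # interpolated temperatures, K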
Plato: A localised orbital based density functional theory code
NASA Astrophysics Data System (ADS)
Kenny, S. D.; Horsfield, A. P.
2009-12-01
The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1] with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momenta and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation. Program summaryProgram title: Plato Catalogue identifier: AEFC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 219 974 No. of bytes in distributed program, including test data, etc.: 1 821 493 Distribution format: tar.gz Programming language: C/MPI and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux and Mac OS X Has the code been vectorised or parallelised?: Yes, up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces. Solution method: Localised orbital based density functional theory. Restrictions: Tight-binding and density functional theory only, no exact exchange. Unusual features: Both atom centred and uniform meshes available. Can deal with arbitrary angular momenta for orbitals, whilst still retaining Slater-Koster tables for accuracy. Running time: Test cases will run in a few minutes, large calculations may run for several days.
Underworld - Bringing a Research Code to the Classroom
NASA Astrophysics Data System (ADS)
Moresi, L. N.; Mansour, J.; Giordani, J.; Farrington, R.; Kaluza, O.; Quenette, S.; Woodcock, R.; Squire, G.
2017-12-01
While there are many reasons to celebrate the passing of punch card programming and flickering green screens, the loss of the sense of wonder at the very existence of computers and the calculations they make possible should not be numbered among them. Computers have become so familiar that students are often unaware that formal and careful design of algorithms and their implementations remains a valuable and important skill that has to be learned and practiced to achieve expertise and genuine understanding. In teaching geodynamics and geophysics at undergraduate level, we aimed to be able to bring our research tools into the classroom - even when those tools are advanced, parallel research codes that we typically deploy on hundreds or thousands of processors, and we wanted to teach not just the physical concepts that are modelled by these codes but a sense of familiarity with computational modelling and the ability to discriminate a reliable model from a poor one. The underworld code (www.underworldcode.org) was developed for modelling plate-scale fluid mechanics and studying problems in lithosphere dynamics. Though specialised for this task, underworld has a straightforward Python user interface that allows it to run within the environment of Jupyter notebooks on a laptop (at modest resolution, of course). The Python interface was developed for adaptability in addressing new research problems, but also lends itself to integration into a Python-driven learning environment. To manage the heavy demands of installing and running underworld in a teaching laboratory, we have developed a workflow in which we install Docker containers in the cloud which support a number of students to run their own environment independently. We share our experience blending notebooks and static webpages into a single web environment, and we explain how we designed our graphics and analysis tools to allow notebook "scripts" to be queued and run on a supercomputer.
Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald
2012-01-01
Background Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results After multivariate regression, running speed of the training units (β = −0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) − 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion The present results suggest that low body fat and running speed during training close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners. PMID:24198587
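The reported regression can be applied directly; the helper below encodes the published coefficients (r² = 0.44), with the function name and example inputs chosen only for illustration.

def predicted_marathon_minutes(body_fat_pct, training_speed_kmh):
    # Coefficients as reported in the abstract (r^2 = 0.44).
    return 326.3 + 2.394 * body_fat_pct - 12.06 * training_speed_kmh

# Example: a runner with 18% body fat who trains at 11 km/h.
minutes = predicted_marathon_minutes(18.0, 11.0)
print(f"predicted race time: {minutes:.0f} min (about {minutes / 60:.1f} h)")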
Magnetic dynamos in accreting planetary bodies
NASA Astrophysics Data System (ADS)
Golabek, G.; Labrosse, S.; Gerya, T.; Morishima, R.; Tackley, P. J.
2012-12-01
Laboratory measurements revealed ancient remanent magnetization in meteorites [1] indicating the activity of magnetic dynamos in the corresponding meteorite parent body. To study under which circumstances dynamo activity is possible, we use a new methodology to simulate the internal evolution of a planetary body during accretion and differentiation. Using the N-body code PKDGRAV [2] we simulate the accretion of planetary embryos from an initial annulus of several thousand planetesimals. The growth history of the largest resulting planetary embryo is used as an input for the thermomechanical 2D code I2ELVIS [3]. The thermomechanical model takes recent parametrizations of impact processes [4] and of the magnetic dynamo [5] into account. It was pointed out that impacts can not only deposit heat deep into the target body, which is later buried by ejecta of further impacts [6], but also that impacts expose in the crater region originally deep-seated layers, thus cooling the interior [7]. This combination of impact effects becomes even more important when we consider that planetesimals of all masses contribute to planetary accretion. This leads occasionally to collisions between bodies with large ratios between impactor and target mass. Thus, all these processes can be expected to have a profound effect on the thermal evolution during the epoch of planetary accretion and may have implications for the magnetic dynamo activity. Results show that late-formed planetesimals do not experience silicate melting and avoid thermal alteration, whereas in early-formed bodies accretion and iron core growth occur almost simultaneously and a highly variable magnetic dynamo can operate in the interior of these bodies.
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in an harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory, consequently properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summaryProgram title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to be ported on different architectures, and impossible to compile on Windows. Furthermore, the test runs results could only be replicated poorly, as a consequence of the simulations being very sensitive to the machine background noise. In practise, as the particles are simulated for billions and billions of steps, the consequence of a small difference in the initial conditions due to the finiteness of double precision real can have macroscopic effects in the output. This is not a problem in its own right, but a feature of such simulations. However, for sake of completeness we have introduced a quadruple precision version of the code which yields the same results independently of the software used to compile it, or the hardware architecture where the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been re-written in C++, and there is no need any longer for cross FORTRAN-C++ compilation. 
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the source tree neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is advisable to replicate each simulation several times with different initialisations of the random sequence.
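The acceptance/rejection step described under "Solution method" can be illustrated with a short sketch. The fragment below is a minimal, hypothetical illustration only (the function sigma_of_E, the time step dt and the cell_volume normalisation are placeholders, not the actual COOL implementation): a candidate pair is accepted for a collision when a uniform random number falls below a probability proportional to the product of cross section and relative speed.

```cpp
#include <cmath>
#include <random>

// Minimal sketch of a DSMC acceptance/rejection collision test.
// sigma_of_E, dt and cell_volume are placeholders; the actual COOL code
// defines its own cross sections and normalisation.
bool attempt_collision(const double v1[3], const double v2[3],
                       double reduced_mass, double dt, double cell_volume,
                       double (*sigma_of_E)(double), std::mt19937_64 &rng)
{
    // Relative speed and centre-of-mass collision energy of the pair.
    double g2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        const double dv = v1[k] - v2[k];
        g2 += dv * dv;
    }
    const double g = std::sqrt(g2);
    const double E = 0.5 * reduced_mass * g2;

    // Collision probability over the time step dt (must stay well below 1).
    const double p = sigma_of_E(E) * g * dt / cell_volume;

    std::uniform_real_distribution<double> uni(0.0, 1.0);
    return uni(rng) < p;   // accept the collision if the random draw falls below p
}
```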
NASA Technical Reports Server (NTRS)
1986-01-01
AGDISP, a computer code written for Langley by Continuum Dynamics, Inc., aids crop dusting airplanes in targeting pesticides. The code is commercially available and can be run on a personal computer by an inexperienced operator. Called SWA+H, it is used by the Forest Service, FAA, DuPont, etc. DuPont uses the code to "test" equipment on the computer using a laser system to measure particle characteristics of various spray compounds.
Validation of CFD/Heat Transfer Software for Turbine Blade Analysis
NASA Technical Reports Server (NTRS)
Kiefer, Walter D.
2004-01-01
I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be spent becoming familiar with these programs and with the structure of the GlennHT code. Additional information is included in the original extended abstract.
Shipman, Carissa; Gosliner, Terrence
2015-06-16
The nudibranch family Dotidae has been an extremely challenging group to study taxonomically due to their small body size, lack of distinct internal morphological differences and similar color patterns. This integrative systematic study of the Dotidae encompasses 29 individuals from the north Atlantic and Mediterranean, and 11 from the Indo-Pacific. Two mitochondrial genes (16S and COI) and a nuclear gene (H3) were sequenced for 31 specimens, and Bayesian and RAxML concatenated analyses were run. Dotidae is monophyletic and possesses strong geographic structure. Co-evolution between some of the north Atlantic taxa and their hydroid prey is apparent, thus supporting the hypothesis that speciation may be correlated with prey diversification. This study also supports the notion that the hydroid prey is a reliable indicator for distinguishing between cryptic species. Doto coronata Gmelin, the type species for the genus Doto, is re-described and a neotype, collected near Goes, Netherlands, is designated. From the molecular data, D. millbayana, D. dunnei, D. koenneckeri and D. maculata Lemche, all within the Doto coronata species complex, are confirmed to be distinct from D. coronata. Based on molecular data, specimens previously identified as D. coronata from South Africa are determined to represent a new species. It is described here and named Doto africoronata n. sp. Kabeiro n. gen. is introduced for the clade of elongate individuals from the Indo-Pacific, which diverges by 11.6% or greater in 16S from short-bodied Doto species. These elongate species are sister to all the short-bodied species and possess an enlarged pericardium, elongate cerata, a reproductive system with a pocketed prostate (penial gland), and an external tube-like digestive gland, which are absent in short-bodied Doto. Species of Kabeiro described here are: Kabeiro christianae n. sp., Kabeiro rubroreticulata n. sp., and Kabeiro phasmida n. sp. from the Philippines. The Indo-Pacific short-bodied species, Doto greenamyeri n. sp. from Papua New Guinea is also described.
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Ross, Dan; Jin, J.-M.; Chatterjee, A.; Volakis, John L.
1991-01-01
Validated results are presented for the new 3D body of revolution finite element boundary integral code. A Fourier series expansion of the vector electric and magnetic fields is employed to reduce the dimensionality of the system, and the exact boundary condition is employed to terminate the finite element mesh. The mesh termination boundary is chosen such that it leads to convolutional boundary operators of low O(n) memory demand. Improvements of this code are discussed along with the proposed formulation for a full 3D implementation of the finite element boundary integral method in conjunction with a conjugate gradient fast Fourier transform (CGFFT) solution.
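As a reminder of the conjugate gradient part of a CGFFT scheme, the sketch below shows a textbook CG iteration for a symmetric positive-definite system; it is not the authors' electromagnetic formulation, and the matvec argument merely stands in for the operator application that CGFFT would evaluate with FFTs.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Textbook conjugate gradient for A x = b with A symmetric positive definite.
// 'matvec' stands in for the operator application (evaluated with FFTs in a
// CGFFT scheme); illustrative sketch only, not the paper's code.
std::vector<double> conjugate_gradient(
    const std::function<void(const std::vector<double>&, std::vector<double>&)> &matvec,
    const std::vector<double> &b, double tol = 1e-8, int max_iter = 1000)
{
    const std::size_t n = b.size();
    std::vector<double> x(n, 0.0), r = b, p = b, Ap(n);

    auto dot = [](const std::vector<double> &u, const std::vector<double> &v) {
        double s = 0.0;
        for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
        return s;
    };

    double rr = dot(r, r);
    for (int it = 0; it < max_iter && std::sqrt(rr) > tol; ++it) {
        matvec(p, Ap);
        const double alpha = rr / dot(p, Ap);             // step length
        for (std::size_t i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        const double rr_new = dot(r, r);
        for (std::size_t i = 0; i < n; ++i) p[i] = r[i] + (rr_new / rr) * p[i];
        rr = rr_new;                                      // updated residual norm squared
    }
    return x;
}
```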
NASA Technical Reports Server (NTRS)
Cullimore, B.
1994-01-01
SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques. The Finite difference formulation of the explicit method is the Forward-difference explicit approximation. The formulation of the implicit method is the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Flight Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001). 
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.
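For reference, the two solution techniques named above can be written for a single lumped-parameter thermal network node; this is the standard textbook form, not a transcription of the SINDA source. For a node with capacitance $C_i$, temperature $T_i$, conductors $G_{ij}$ to neighbouring nodes and source $Q_i$, the forward-difference explicit update is

$$ C_i\,\frac{T_i^{\,n+1} - T_i^{\,n}}{\Delta t} = \sum_j G_{ij}\left(T_j^{\,n} - T_i^{\,n}\right) + Q_i^{\,n}, $$

while the Crank-Nicolson implicit update averages the right-hand side between the two time levels:

$$ C_i\,\frac{T_i^{\,n+1} - T_i^{\,n}}{\Delta t} = \frac{1}{2}\left[\sum_j G_{ij}\left(T_j^{\,n} - T_i^{\,n}\right) + Q_i^{\,n}\right] + \frac{1}{2}\left[\sum_j G_{ij}\left(T_j^{\,n+1} - T_i^{\,n+1}\right) + Q_i^{\,n+1}\right]. $$

The explicit form is cheap per step but conditionally stable, whereas the Crank-Nicolson form requires a linear solve per step but is unconditionally stable for this class of diffusion-type problems.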
Physical fitness in children with developmental coordination disorder.
Schott, Nadja; Alof, Verena; Hultsch, Daniela; Meermann, Dagmar
2007-12-01
The protective effects of physical activity and fitness on cardiovascular health have clearly been shown among normally developed children. However data are currently lacking pertaining to children with developmental coordination disorder (DCD). The purpose of this study was to examine differences in fitness measures, body composition, and physical activity among children with and without DCD. A cross-sectional design was implemented examining 261 children (118 girls, 143 boys) ages 4-12 years (mean age 7.8 +/- 1.9 years). Children were categorized as having DCD if they scored less than or equal to the 5th percentile (n=71) or between the 6th and the 15th percentile (n=5) on the Movement Assessment Battery for Children (MABC; Henderson & Sugden, 1992). The typically developing children had scores between the 16th and the 50th percentile (n=16) or above the 50th percentile (n=3) on the MABC. The age-related body mass index was used to characterize body composition. Physical fitness was assessed with a 6-min run, 20-m sprint, jump-and-reach test, medicine ball throw, curl-ups, and sit-and-reach test. Physical activity was estimated with a questionnaire. The percentage of overweight and obese children ages 10-12 years were significantly higher in the DCD groups (severe: 50%, moderate: 23.1%) than in the typically developing groups (medium: 5.6%, high: 0%; p < .05). Significant interactions (MABC x Age Group) were found for the fitness tests (p values < .05), except flexibility; whereby specifically, compared to the children in the typically developing groups children in the DCD groups ages 4-6 years achieved significantly worse results for the 20-m sprint, and children of the DCD groups ages 10-12 years achieved significantly worse results for the 6-min run, jump-and-reach test, and medicine ball throw. The study demonstrates poorer performance in fitness tests with high demands on coordination in children with DCD compared to their typically developing peers. Furthermore, the differences in fitness increased with age between children in the DCD groups versus the typically developing groups.
Charania, N A; Tsuji, L J S
2011-01-01
First Nation communities were highly impacted by the 2009 H1N1 influenza pandemic. Multiple government bodies (ie federal, provincial, and First Nations) in Canada share responsibility for the health sector pandemic response in remote and isolated First Nation communities and this may have resulted in a fragmented pandemic response. This study aimed to discover if and how the dichotomy (or trichotomy) of involved government bodies led to barriers faced and opportunities for improvement during the health sector response to the 2009 H1N1 pandemic in three remote and isolated sub-arctic First Nation communities of northern Ontario, Canada. A qualitative community-based participatory approach was employed. Semi-directed interviews were conducted with adult key informants (n=13) using purposive sampling of participants representing the two (or three) government bodies of each study community. Data were manually transcribed and coded using deductive and inductive thematic analysis to reveal positive aspects, barriers faced, and opportunities for improvement along with the similarities and differences regarding the pandemic responses of each government body. Primary barriers faced by participants included receiving contradicting governmental guidelines and direction from many sources. In addition, there was a lack of human resources, information sharing, and specific details included in community-level pandemic plans. Recommended areas of improvement include developing a complementary communication plan, increasing human resources, and updating community-level pandemic plans. Participants reported many issues that may be attributable to the dichotomy (or trichotomy) of government bodies responsible for healthcare delivery during a pandemic. Increasing formal communication and collaboration between responsible government bodies will assist in clarifying roles and responsibilities and improve the pandemic response in Canada's remote and isolated First Nation communities.
A user's guide to Sandia's Latin hypercube sampling software: LHS UNIX library/standalone version.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Wyss, Gregory Dane
2004-07-01
This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
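The core idea of Latin hypercube sampling is easy to sketch: each of the d input dimensions is divided into N equal-probability strata, one sample is drawn from every stratum, and the strata are paired across dimensions by independent random permutations. The fragment below is a minimal illustration of that idea for uniform [0,1) inputs; it is not the Sandia LHS library itself, and non-uniform distributions would additionally require an inverse-CDF transform of each coordinate.

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Minimal Latin hypercube sample on the unit hypercube:
// 'n' points in 'dim' dimensions, one point per stratum in each dimension.
std::vector<std::vector<double>> latin_hypercube(int n, int dim, std::mt19937_64 &rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<std::vector<double>> sample(n, std::vector<double>(dim));

    for (int d = 0; d < dim; ++d) {
        // Random permutation of the strata indices 0..n-1 for this dimension.
        std::vector<int> perm(n);
        std::iota(perm.begin(), perm.end(), 0);
        std::shuffle(perm.begin(), perm.end(), rng);

        // One uniform draw inside each stratum of width 1/n.
        for (int i = 0; i < n; ++i)
            sample[i][d] = (perm[i] + uni(rng)) / static_cast<double>(n);
    }
    return sample;
}
```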
Comparison of SPHC Hydrocode Results with Penetration Equations and Results of Other Codes
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Stallworth, Roderick; Stellingwerf, Robert F.
2004-01-01
The SPHC hydrodynamic code was used to simulate impacts of spherical aluminum projectiles on a single-wall aluminum plate and on a generic Whipple shield. Simulations were carried out in two and three dimensions. Projectile speeds ranged from 2 kilometers per second to 10 kilometers per second for the single-wall runs, and from 3 kilometers per second to 40 kilometers per second for the Whipple shield runs. Spallation limit results of the single-wall simulations are compared with predictions from five standard penetration equations, and are shown to fall comfortably within the envelope of these analytical relations. Ballistic limit results of the Whipple shield simulations are compared with results from the AUTODYN-2D and PAM-SHOCK-3D codes presented in a paper at the Hypervelocity Impact Symposium 2000 and the Christiansen formulation of 2003.
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
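To make the idea concrete, here is a small illustration (not taken from the paper) using the parity-check matrix of the (7,4) Hamming code: a 7-bit source block x is treated as an error pattern, and only its 3-bit syndrome s = H x (mod 2) is transmitted; a decoder would then reconstruct x as the lowest-weight pattern consistent with that syndrome.

```cpp
#include <array>
#include <cstdint>

// Illustrative syndrome computation with a (7,4) Hamming parity-check matrix.
// The 7-bit source word x is treated as an "error pattern"; its 3-bit
// syndrome s = H * x (mod 2) is the compressed representation.
std::array<uint8_t, 3> syndrome(const std::array<uint8_t, 7> &x)
{
    // One common form of the (7,4) Hamming parity-check matrix H (3 x 7):
    // column j holds the binary representation of j+1.
    static const uint8_t H[3][7] = {
        {1, 0, 1, 0, 1, 0, 1},
        {0, 1, 1, 0, 0, 1, 1},
        {0, 0, 0, 1, 1, 1, 1},
    };

    std::array<uint8_t, 3> s{};
    for (int i = 0; i < 3; ++i) {
        uint8_t bit = 0;
        for (int j = 0; j < 7; ++j)
            bit ^= static_cast<uint8_t>(H[i][j] & x[j]);   // mod-2 accumulation
        s[i] = bit;
    }
    return s;
}
```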
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N^2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10^7) to 300 ms (N = 10^9). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
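The "template" idea, interaction kernels written independently of the concrete particle type, can be illustrated with a short generic sketch. This is not the FDPS API; it only shows the style of writing an O(N^2) gravity kernel against whatever particle type supplies position, mass and an acceleration accumulator.

```cpp
#include <cmath>
#include <vector>

// Generic direct-summation gravity kernel, written against any particle type
// that exposes pos[3], mass and acc[3]. Illustrative sketch of the "template"
// style described in the text, not the actual FDPS interface.
template <typename Particle>
void calc_gravity(std::vector<Particle> &p, double eps2)
{
    for (auto &pi : p)
        for (int k = 0; k < 3; ++k) pi.acc[k] = 0.0;

    for (std::size_t i = 0; i < p.size(); ++i) {
        for (std::size_t j = 0; j < p.size(); ++j) {
            if (i == j) continue;
            double dr[3], r2 = eps2;                     // softened separation
            for (int k = 0; k < 3; ++k) {
                dr[k] = p[j].pos[k] - p[i].pos[k];
                r2 += dr[k] * dr[k];
            }
            const double rinv3 = 1.0 / (r2 * std::sqrt(r2));
            for (int k = 0; k < 3; ++k)
                p[i].acc[k] += p[j].mass * dr[k] * rinv3;  // units with G = 1
        }
    }
}
```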
Zetterqvist, Anna V; Merlo, Juan; Mulinari, Shai
2015-02-01
In many European countries, medicines promotion is governed by voluntary codes of practice administered by the pharmaceutical industry under its own system of self-regulation. Involvement of industry organizations in policing promotion has been proposed to deter illicit conduct, but few detailed studies on self-regulation have been carried out to date. The objective of this study was to examine the evidence for promotion and self-regulation in the UK and Sweden, two countries frequently cited as examples of effective self-regulation. We performed a qualitative content analysis of documents outlining the constitutions and procedures of these two systems. We also gathered data from self-regulatory bodies on complaints, complainants, and rulings for the period 2004-2012. The qualitative analysis revealed similarities and differences between the countries. For example, self-regulatory bodies in both countries are required to actively monitor promotional items and impose sanctions on violating companies, but the range of sanctions is greater in the UK where companies may, for instance, be audited or publicly reprimanded. In total, Swedish and UK bodies ruled that 536 and 597 cases, respectively, were in breach, equating to an average of more than one case/week for each country. In Sweden, 430 (47%) complaints resulted from active monitoring, compared with only two complaints (0.2%) in the UK. In both countries, a majority of violations concerned misleading promotion. Charges incurred on companies averaged €447,000 and €765,000 per year in Sweden and the UK, respectively, equivalent to about 0.014% and 0.0051% of annual sales revenues, respectively. One hundred cases in the UK (17% of total cases in breach) and 101 (19%) in Sweden were highlighted as particularly serious. A total of 46 companies were ruled in breach of code for a serious offence at least once in the two countries combined (n = 36 in the UK; n = 27 in Sweden); seven companies were in serious violation more than ten times each. A qualitative content analysis of serious violations pertaining to diabetes drugs (UK, n = 15; Sweden, n = 6; 10% of serious violations) and urologics (UK, n = 6; Sweden, n = 13; 9%) revealed various types of violations: misleading claims (n = 23; 58%); failure to comply with undertakings (n = 9; 23%); pre-licensing (n = 7; 18%) or off-label promotion (n = 2; 5%); and promotion of prescription drugs to the public (n = 6; 15%). Violations that go undetected or unpunished by self-regulatory bodies are the main limitation of this study, since they are likely to lead to an underestimate of industry misconduct. The prevalence and severity of breaches testifies to a discrepancy between the ethical standard codified in industry Codes of Conduct and the actual conduct of the industry. We discuss regulatory reforms that may improve the quality of medicines information, such as pre-vetting and intensified active monitoring of promotion, along with larger fines, and giving greater publicity to rulings. But despite the importance of improving regulatory arrangements in an attempt to ensure unbiased medicines information, such efforts alone are insufficient because simply improving oversight and increasing penalties fail to address additional layers of industry bias.
Modeling of Aerodynamic Force Acting in Tunnel for Analysis of Riding Comfort in a Train
NASA Astrophysics Data System (ADS)
Kikko, Satoshi; Tanifuji, Katsuya; Sakanoue, Kei; Nanba, Kouichiro
In this paper, we aimed to model the aerodynamic force that acts on a train running at high speed in a tunnel. An analytical model of the aerodynamic force is developed from pressure data measured on the car-body sides of a test train running at the maximum revenue operation speed. The simulation of an 8-car train running while subjected to the modeled aerodynamic force gives the following results. The simulated car-body vibration corresponds to the actual vibration both qualitatively and quantitatively for the cars at the rear of the train. The separation of the airflow at the tail-end of the train increases the yawing vibration of the tail-end car, while it has little effect on the car-body vibration of the adjoining car. The effect of the moving velocity of the aerodynamic force on the car-body vibration is also clarified: a simulation performed under the assumption of a stationary aerodynamic force can markedly increase the predicted car-body vibration.
GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac
2017-03-01
The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.
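For reference, the three-body character mentioned above enters through the bond-order term of the standard Tersoff form (parameter conventions differ slightly between the variants implemented in LAMMPS, so the expressions below are the generic textbook version rather than any one parameterization):

$$ E = \frac{1}{2}\sum_{i}\sum_{j\neq i} f_C(r_{ij})\left[f_R(r_{ij}) + b_{ij}\, f_A(r_{ij})\right], \qquad f_R(r) = A\,e^{-\lambda_1 r},\quad f_A(r) = -B\,e^{-\lambda_2 r}, $$

where $f_C$ is a smooth cutoff and the bond order $b_{ij} = \left(1 + \beta^{n}\zeta_{ij}^{\,n}\right)^{-1/(2n)}$ depends on all other neighbours $k$ of atom $i$ through $\zeta_{ij} = \sum_{k\neq i,j} f_C(r_{ik})\, g(\theta_{ijk})$ (up to an additional radial factor in some parameterizations). It is this angular sum over third atoms that makes the potential a three-body interaction and drives the extra arithmetic and data dependency noted in the abstract.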
Automated Detection and Analysis of Interplanetary Shocks Running Real-Time on the Web
NASA Astrophysics Data System (ADS)
Vorotnikov, V.; Smith, C. W.; Hu, Q.; Szabo, A.; Skoug, R. M.; Cohen, C. M.; Davis, A. J.
2008-05-01
The ACE real-time data stream provides web-based nowcasting capabilities for solar wind conditions upstream of Earth. We have built a fully automated code that finds and analyzes interplanetary shocks as they occur and posts their solutions on the Web for possible real-time application to space weather nowcasting. Shock analysis algorithms based on the Rankine-Hugoniot jump conditions exist and are in widespread use today for the interactive analysis of interplanetary shocks, yielding parameters such as shock speed, propagation direction and shock strength in the form of compression ratios. At a previous meeting we reported on efforts to develop a fully automated code that used ACE Level-2 (science quality) data to prove the applicability and correctness of the code and the associated shock-finder. We have since adapted the code to run on ACE RTSW data provided by NOAA. These data lack the full 3-dimensional velocity vector for the solar wind and contain only a single-component wind speed. We show that by assuming the wind velocity to be radial, strong-shock solutions remain essentially unchanged and the analysis performs as well as it would if 3-D velocity components were available. This is due, at least in part, to the fact that strong shocks tend to have nearly radial shock normals, and it is the strong shocks that are most effective in space weather applications. Strong shocks are the only shocks that concern us in this application. The code is now running on the Web and the results are available to all.
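Under the radial-flow assumption described above, a first estimate of the shock speed and compression ratio follows from mass-flux conservation across the shock using only densities and radial speeds. The fragment below is a minimal sketch of that relation, not the operational ACE code, which solves the full Rankine-Hugoniot system.

```cpp
#include <stdexcept>

// Shock parameters from upstream (u) and downstream (d) plasma states under
// the radial-flow assumption: the mass flux rho*(v - Vsh) is conserved across
// the shock. Illustrative sketch only.
struct ShockEstimate {
    double speed;        // shock speed along the radial direction (same units as v)
    double compression;  // density compression ratio rho_d / rho_u
};

ShockEstimate shock_from_mass_flux(double rho_u, double v_u,
                                   double rho_d, double v_d)
{
    if (rho_d == rho_u)
        throw std::invalid_argument("no density jump: not a shock");
    ShockEstimate s;
    s.speed       = (rho_d * v_d - rho_u * v_u) / (rho_d - rho_u);
    s.compression = rho_d / rho_u;
    return s;
}
```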
Lamb, Mary K; Innes, Kerry; Saad, Patricia; Rust, Julie; Dimitropoulos, Vera; Cumerlato, Megan
The Performance Indicators for Coding Quality (PICQ) is a data quality assessment tool developed by Australia's National Centre for Classification in Health (NCCH). PICQ consists of a number of indicators covering all ICD-10-AM disease chapters, some procedure chapters from the Australian Classification of Health Interventions (ACHI) and some Australian Coding Standards (ACS). The indicators can be used to assess the coding quality of hospital morbidity data by monitoring compliance with coding conventions and ACS; this enables the identification of particular records that may be incorrectly coded, thus providing a measure of data quality. There are 31 obstetric indicators available for the ICD-10-AM Fourth Edition. Twenty of these 31 indicators were classified as Fatal, nine as Warning and two as Relative. These indicators were used to examine coding quality of obstetric records in the 2004-2005 financial year Australian national hospital morbidity dataset. Records with obstetric disease or procedure codes listed anywhere in the code string were extracted and exported from the SPSS source file. Data were then imported into a Microsoft Access database table as per PICQ instructions, and run against all Fatal, Warning and Relative (N=31) obstetric PICQ 2006 Fourth Edition Indicators v.5 for the ICD-10-AM Fourth Edition. There were 689,905 gynaecological and obstetric records in the 2004-2005 financial year, of which 1.14% were found to have triggered Fatal degree errors, 3.78% Warning degree errors and 8.35% Relative degree errors. The types of errors include completeness, redundancy, specificity and sequencing problems. It was found that PICQ is a useful initial screening tool for the assessment of ICD-10-AM/ACHI coding quality. Overall, the codes assigned to obstetric records in the 2004-2005 Australian national morbidity dataset are of fair quality.
Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization
NASA Astrophysics Data System (ADS)
Luo, Chuanfu; Sommer, Jens-Uwe
2009-08-01
We present a patch code for LAMMPS to implement a coarse grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and a Lennard-Jones 9-6 (LJ96) style interaction for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, which is a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing. This is consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During the heating process, it is found that the crystalline regions are still growing until they are fully melted, which can be confirmed by the evolution of both the static structure factor and the average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. This is the first time that such growth/reorganization behavior has been clearly observed in MD simulations. Our code can be easily used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters. Program summary. Program title: lammps-cgpva Catalogue identifier: AEDE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU's GPL No. of lines in distributed program, including test data, etc.: 940 798 No. of bytes in distributed program, including test data, etc.: 12 536 245 Distribution format: tar.gz Programming language: C++/MPI Computer: Tested on Intel-x86 and AMD64 architectures. Should run on any architecture providing a C++ compiler Operating system: Tested under Linux. Any other OS with C++ compiler and MPI library should suffice Has the code been vectorized or parallelized?: Yes RAM: Depends on system size and how many CPUs are used Classification: 7.7 External routines: LAMMPS (http://lammps.sandia.gov/), FFTW (http://www.fftw.org/) Nature of problem: Implementing special tabulated angle potentials and Lennard-Jones 9-6 style interactions of a coarse grained polymer model for the LAMMPS code. Solution method: Cubic spline interpolation of input tabulated angle potential data. Restrictions: The code is based on a former version of LAMMPS. Unusual features: Any special angular potential can be used if it can be tabulated. Running time: Seconds to weeks, depending on system size, speed of CPU and how many CPUs are used. The test run provided with the package takes about 5 minutes on 4 AMD Opteron (2.6 GHz) CPUs. References: D. Reith, H. Meyer, F. Müller-Plathe, Macromolecules 34 (2001) 2335-2345. H. Meyer, F. Müller-Plathe, J. Chem. Phys. 115 (2001) 7807. H. Meyer, F. Müller-Plathe, Macromolecules 35 (2002) 1241-1252.
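As an illustration of the pair interaction named above, one common 9-6 Lennard-Jones parameterization (well depth eps at the minimum r = sigma) and its force can be evaluated as below. This is a hedged sketch: the exact prefactor convention used by the CG-PVA patch and by the LAMMPS lj96 style may differ from the form shown here.

```cpp
#include <cmath>

// One common 9-6 Lennard-Jones form with well depth eps at r = sigma:
//   E(r) = eps * ( 2*(sigma/r)^9 - 3*(sigma/r)^6 )
// Illustrative only; the prefactor convention of the distributed patch may differ.
void lj96(double r, double eps, double sigma, double &energy, double &force)
{
    const double sr3 = std::pow(sigma / r, 3);
    const double sr6 = sr3 * sr3;
    const double sr9 = sr6 * sr3;
    energy = eps * (2.0 * sr9 - 3.0 * sr6);
    force  = 18.0 * eps * (sr9 - sr6) / r;   // force = -dE/dr, positive when repulsive
}
```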
Factors affecting the energy cost of level running at submaximal speed.
Lacour, Jean-René; Bourdin, Muriel
2015-04-01
Metabolic measurement is still the criterion for investigation of the efficiency of mechanical work and for analysis of endurance performance in running. Metabolic demand may be expressed either as the energy spent per unit distance (energy cost of running, Cr) or as energy demand at a given running speed (running economy). Systematic studies showed a range of costs of about 20% between runners. Factors affecting Cr include body dimensions: body mass and leg architecture, mostly calcaneal tuberosity length, responsible for 60-80% of the variability. Children show a higher Cr than adults. Higher resting metabolism and lower leg length/stature ratio are the main putative factors responsible for the difference. Elastic energy storage and reuse also contribute to the variability of Cr. The increase in Cr with increasing running speed due to the increase in mechanical work is blunted until 6-7 m/s by the increase in vertical stiffness and the decrease in ground contact time. Fatigue induced by prolonged or intense running is associated with up to 10% increased Cr; the contribution of metabolic and biomechanical factors remains unclear. Women show a Cr similar to men of similar body mass, despite differences in gait pattern. The superiority of black African runners is presumably related to their leg architecture and better elastic energy storage and reuse.
Van Caekenberghe, Ine; Segers, Veerle; Aerts, Peter; Willems, Patrick; De Clercq, Dirk
2013-01-01
Literature shows that running on an accelerated motorized treadmill is mechanically different from accelerated running overground. Overground, the subject has to enlarge the net anterior–posterior force impulse proportional to acceleration in order to overcome linear whole body inertia, whereas on a treadmill, this force impulse remains zero, regardless of belt acceleration. Therefore, it can be expected that changes in kinematics and joint kinetics of the human body also are proportional to acceleration overground, whereas no changes according to belt acceleration are expected on a treadmill. This study documents kinematics and joint kinetics of accelerated running overground and running on an accelerated motorized treadmill belt for 10 young healthy subjects. When accelerating overground, ground reaction forces are characterized by less braking and more propulsion, generating a more forward-oriented ground reaction force vector and a more forwardly inclined body compared with steady-state running. This change in body orientation as such is partly responsible for the changed force direction. Besides this, more pronounced hip and knee flexion at initial contact, a larger hip extension velocity, smaller knee flexion velocity and smaller initial plantarflexion velocity are associated with less braking. A larger knee extension and plantarflexion velocity result in larger propulsion. Altogether, during stance, joint moments are not significantly influenced by acceleration overground. Therefore, we suggest that the overall behaviour of the musculoskeletal system (in terms of kinematics and joint moments) during acceleration at a certain speed remains essentially identical to steady-state running at the same speed, yet acting in a different orientation. However, because acceleration implies extra mechanical work to increase the running speed, muscular effort done (in terms of power output) must be larger. This is confirmed by larger joint power generation at the level of the hip and lower power absorption at the knee as the result of subtle differences in joint velocity. On a treadmill, ground reaction forces are not influenced by acceleration and, compared with overground, virtually no kinesiological adaptations to an accelerating belt are observed. Consequently, adaptations to acceleration during running differ from treadmill to overground and should be studied in the condition of interest. PMID:23676896
Xu, Chun; Silder, Amy; Zhang, Ju; Reifman, Jaques; Unnikrishnan, Ginu
2017-03-23
Load carriage is associated with musculoskeletal injuries, such as stress fractures, during military basic combat training. By investigating the influence of load carriage during exercises on the kinematics and kinetics of the body and on the biomechanical responses of bones, such as the tibia, we can quantify the role of load carriage on bone health. We conducted a cross-sectional study using an integrated musculoskeletal-finite-element model to analyze how the amount of load carriage in women affected the kinematics and kinetics of the body, as well as the tibial mechanical stress during running. We also compared the biomechanics of walking (studied previously) and running under various load-carriage conditions. We observed substantial changes in both hip kinematics and kinetics during running when subjects carried a load. Relative to those observed during running without load, the joint reaction forces at the hip increased by an average of 49.1% body weight when subjects carried a load that was 30% of their body weight (ankle, 4.8%; knee, 20.6%). These results indicate that the hip extensor muscles in women are the main power generators when running with load carriage. When comparing running with walking, finite element analysis revealed that the peak tibial stress during running (tension, 90.6 MPa; compression, 136.2 MPa) was more than three times as great as that during walking (tension, 24.1 MPa; compression, 40.3 MPa), whereas the cumulative stress within one stride did not differ substantially between running (15.2 MPa · s) and walking (13.6 MPa · s). Our findings highlight the critical role of hip extensor muscles and their potential injury in women when running with load carriage. More importantly, our results underscore the need to incorporate the cumulative effect of mechanical stress when evaluating injury risk under various exercise conditions. The results from our study help to elucidate the mechanisms of stress fracture in women.
Polat, Metin; Korkmaz Eryılmaz, Selcen; Aydoğan, Sami
2018-01-01
In order to ensure that athletes achieve their highest performance levels during competitive seasons, monitoring their long-term performance data is crucial for understanding the impact of ongoing training programs and evaluating training strategies. The present study was thus designed to investigate the variations in body composition, maximal oxygen uptake (VO2max), and gas exchange threshold values of cross-country skiers across training phases throughout a season. In total, 15 athletes who participate in international cross-country ski competitions voluntarily took part in this study. The athletes underwent incremental treadmill running tests at 3 different time points over a period of 1 year. The first measurements were obtained in July, during the first preparation period; the second measurements were obtained in October, during the second preparation period; and the third measurements were obtained in February, during the competition period. Body weight, body mass index (BMI), body fat (%), as well as VO2max values and gas exchange threshold, measured using the V-slope method during the incremental running tests, were assessed at all 3 time points. The collected data were analyzed using the SPSS 20 software package. Significant differences between the measurements were assessed using Friedman's two-way analysis of variance with a post hoc option. The athletes' body weights and BMI measurements at the third time point were significantly lower compared with the results of the second measurement (p < 0.001). Moreover, the incremental running test time was significantly higher at the third measurement, compared with both the first (p < 0.05) and the second (p < 0.01) measurements. Similarly, the running speed during the test was significantly higher at the third measurement time point compared with the first measurement time point (p < 0.05). Body fat (%), time to reach the gas exchange threshold, running speed at the gas exchange threshold, VO2max, amount of oxygen consumed at the gas exchange threshold level (VO2GET), maximal heart rate (HRmax), and heart rate at the gas exchange threshold level (HRGET) values did not significantly differ between the measurement time points (p > 0.05). VO2max and gas exchange threshold values recorded during the third measurements, the timing of which coincided with the competitive season of the cross-country skiers, did not significantly change, but their incremental running test time and running speed significantly increased while their body weight and BMI significantly decreased. These results indicate that the cross-country skiers developed a tolerance for high-intensity exercise and reached their highest level of athletic performance during the competitive season.
Training in Methods in Computational Neuroscience
1989-11-14
inferior colliculus served as inputs to a sheet of 100 cells within the medial geniculate body where combination sensitivity is first observed. ... The course is for advanced graduate students and postdoctoral fellows in neurobiology, physics, electrical engineering, computer science and psychology.
Knechtle, Beat; Knechtle, Patrizia; Barandun, Ursula; Rosemann, Thomas
2011-05-01
The relationship between skin-fold thickness and running has been investigated in distances ranging from 100 m to the marathon distance (42.195 km), with the exclusion of the half-marathon distance (21.0975 km). We investigated the association between anthropometric variables, prerace experience, and training variables with race time in 42 recreational, nonprofessional, female half-marathon runners using bi- and multivariate analysis. Body weight (r, 0.60); body mass index (r, 0.48); body fat percentage (r, 0.56); pectoral (r, 0.61), mid-axilla (r, 0.69), triceps (r, 0.49), subscapular (r, 0.61), abdominal (r, 0.59), suprailiac (r, 0.55), and medial calf (r, 0.53) skin-fold thickness; mean speed of the training sessions (r, -0.68); and personal best time in a half-marathon (r, 0.69) correlated with race time after bivariate analysis. Body weight (P = 0.0054), pectoral skin-fold thickness (P = 0.0068), and mean speed of the training sessions (P = 0.0041) remained significant after multivariate analysis. Mean running speed during training was related to mid-axilla (r, -0.31), subscapular (r, -0.38), abdominal (r, -0.44), and suprailiac (r, -0.41) skin-fold thickness, the sum of 8 skin-fold thicknesses (r, -0.36); and percent body fat (r, -0.31). It was determined that variables of both anthropometry and training were related to half-marathon race time, and that skin-fold thicknesses were associated with running speed during training. For practical applications, high running speed during training (as opposed to extensive training) may both reduce upper-body skin-fold thicknesses and improve race performance in recreational female half-marathon runners.
Particle In Cell Codes on Highly Parallel Architectures
NASA Astrophysics Data System (ADS)
Tableman, Adam
2014-10-01
We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
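As a reminder of the particle update at the heart of any PIC code, and the part most readily mapped to one particle per GPU thread, a one-dimensional electrostatic leapfrog push with a linear field gather looks like the sketch below; this is a generic illustration, not Osiris code, and the field solve and charge deposit steps are omitted.

```cpp
#include <cmath>
#include <vector>

// 1D electrostatic leapfrog push with linear (CIC) gather of the grid field E.
// Generic illustration of the PIC particle update; field solve and deposit omitted.
void push_particles(std::vector<double> &x, std::vector<double> &v,
                    const std::vector<double> &E, double dx, double dt,
                    double charge_over_mass, double box_length)
{
    const int ng = static_cast<int>(E.size());
    for (std::size_t p = 0; p < x.size(); ++p) {
        // Gather the field at the particle position by linear interpolation.
        const double s = x[p] / dx;
        int i0 = static_cast<int>(std::floor(s)) % ng;
        if (i0 < 0) i0 += ng;                       // keep index in [0, ng)
        const int    i1 = (i0 + 1) % ng;
        const double w  = s - std::floor(s);
        const double Ep = (1.0 - w) * E[i0] + w * E[i1];

        // Leapfrog update: velocity, then position.
        v[p] += charge_over_mass * Ep * dt;
        x[p] += v[p] * dt;

        // Periodic boundary condition.
        x[p] = std::fmod(x[p] + box_length, box_length);
    }
}
```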
CSlib, a library to couple codes via Client/Server messaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plimpton, Steve
The CSlib is a small, portable library which enables two (or more) independent simulation codes to be coupled by exchanging messages with each other. Both codes link to the library when they are built, and can then communicate with each other as they run. The messages contain data or instructions that the two codes send back and forth to each other. The messaging can take place via files, sockets, or MPI. The latter is a standard distributed-memory message-passing library.
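A minimal picture of the client/server coupling pattern, shown here directly with MPI point-to-point calls rather than the CSlib interface (which is not reproduced here), is: one code sends a tagged message carrying data, the other receives it, acts on it, and replies. The tags, sizes and communicator handling below are illustrative assumptions only.

```cpp
#include <mpi.h>
#include <vector>

// Generic client-side exchange between two coupled codes sharing the
// communicator 'comm'. Illustrative only; CSlib wraps this kind of pattern
// (plus file- and socket-based transport) behind its own interface.
void client_step(MPI_Comm comm, const std::vector<double> &fields,
                 std::vector<double> &reply)
{
    // Send a request tagged 1 with the field data, then wait for the answer (tag 2).
    MPI_Send(fields.data(), static_cast<int>(fields.size()), MPI_DOUBLE,
             /*dest=*/0, /*tag=*/1, comm);

    MPI_Status status;
    MPI_Probe(0, 2, comm, &status);          // learn the reply size first
    int n = 0;
    MPI_Get_count(&status, MPI_DOUBLE, &n);
    reply.resize(static_cast<std::size_t>(n));
    MPI_Recv(reply.data(), n, MPI_DOUBLE, 0, 2, comm, MPI_STATUS_IGNORE);
}
```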
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. This code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open-source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication
Bothe, Hermann; Tripp, H James; Zehr, Jonathan P
2010-10-01
Some unicellular N(2)-fixing cyanobacteria have recently been found to lack a functional photosystem II of photosynthesis. Such organisms of the oceanic picoplankton, provisionally termed UCYN-A, are major contributors to the global marine N-input by N(2)-fixation. Since their photosystem II is inactive, they can perform N(2)-fixation during the day. UCYN-A organisms cannot be cultivated as yet. Their genomic analysis indicates that they lack genes coding for enzymes of the Calvin cycle, the tricarboxylic acid cycle and for the biosynthesis of several amino acids. The carbon source in the ocean that allows them to thrive in such high abundance has not been identified. Their genomic analysis implies that they metabolize organic carbon by a new mode of life. These unicellular N(2)-fixing cyanobacteria of the oceanic picoplankton are evolutionarily related to spheroid bodies present in diatoms of the family Epithemiaceae, such as Rhopalodia gibba. More recently, spheroid bodies were ultimately proven to be related to cyanobacteria and to express nitrogenase. They have been reported to be completely inactive in all photosynthetic reactions despite the presence of thylakoids. Sequence data show that R. gibba and its spheroid bodies are an evolutionarily young symbiosis that might serve as a model system to unravel early events in the evolution of chloroplasts. The cell metabolism of UCYN-A and the spheroid bodies may be related to that of the acetate photoassimilating green alga Chlamydobotrys.
2012-03-01
The approved Statement of Work proposed a timeline (Table 1 of the report). Prosthesis designs (Figure 1 of the report) tested for this project included the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah® (Ossur) and Nitro
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J
2008-01-01
The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance, like live virtual machine migration. Given these attractive benefits of virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
Earth Global Reference Atmospheric Model (GRAM99): Short Course
NASA Technical Reports Server (NTRS)
Leslie, Fred W.; Justus, C. G.
2007-01-01
Earth-GRAM is a FORTRAN software package that can run on a variety of platforms including PCs. For any time and location in the Earth's atmosphere, Earth-GRAM provides values of atmospheric quantities such as temperature, pressure, density, winds, constituents, etc. Dispersions (perturbations) of these parameters are also provided and have realistic correlations, means, and variances - useful for Monte Carlo analysis. Earth-GRAM is driven by observations including a tropospheric database available from the National Climatic Data Center. Although Earth-GRAM can be run in a "stand-alone" mode, many users incorporate it into their trajectory codes. The source code is distributed free-of-charge to eligible recipients.
Injecting Artificial Memory Errors Into a Running Computer Program
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
2008-01-01
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of the effect of the SEU on a floating-point value or other program variable.
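The core fault-injection operation is simple to sketch: pick bits at random within a target buffer and XOR them, with the number of injected upsets governed by the configured fault probability or rate. The fragment below is a generic illustration of that idea, not the BITFLIPS/Valgrind plug-in itself, and the per-bit probability p_bit is an assumed parameterization.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>

// Inject single-bit upsets into 'buf' with probability 'p_bit' per bit.
// Generic illustration of SEU injection; BITFLIPS performs the equivalent
// operation on the running program's memory from inside Valgrind.
std::size_t inject_seus(uint8_t *buf, std::size_t nbytes, double p_bit,
                        std::mt19937_64 &rng)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::size_t flips = 0;
    for (std::size_t i = 0; i < nbytes; ++i) {
        for (int b = 0; b < 8; ++b) {
            if (uni(rng) < p_bit) {
                buf[i] ^= static_cast<uint8_t>(1u << b);   // flip one bit
                ++flips;
            }
        }
    }
    return flips;   // number of upsets injected, useful for logging
}
```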
Spatial application of WEPS for estimating wind erosion in the Pacific Northwest
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...
2006-12-01
The code was initially developed to be run within the NetBeans IDE 5.04 running J2SE 5.0. During the course of the development, Eclipse SDK 3.2 ... covers the results from the research. Chapter V concludes and recommends future research.
NASA Astrophysics Data System (ADS)
Boudreaux, R. D.; Metzger, C. E.; Macias, B. R.; Shirazi-Fard, Y.; Hogan, H. A.; Bloomfield, S. A.
2014-06-01
Astronauts on long duration missions continue to experience bone loss, as much as 1-2% each month, for up to 4.5 years after a mission. Mechanical loading of bone with exercise has been shown to increase bone formation, mass, and geometry. The aim of this study was to compare the efficacy of two exercise protocols during a period of reduced gravitational loading (1/6th body weight) in mice. Since muscle contractions via resistance exercise impart the largest physiological loads on the skeleton, we hypothesized that resistance training (via vertical tower climbing) would better protect against the deleterious musculoskeletal effects of reduced gravitational weight bearing when compared to endurance exercise (treadmill running). Young adult female BALB/cBYJ mice were randomly assigned to three groups: 1/6 g (G/6; n=6), 1/6 g with treadmill running (G/6+RUN; n=8), or 1/6 g with vertical tower climbing (G/6+CLB; n=9). Exercise was performed five times per week. Reduced weight bearing for 21 days was achieved through a novel harness suspension system. Treadmill velocity (12-20 m/min) and daily run time duration (32-51 min) increased incrementally throughout the study. Bone geometry and volumetric bone mineral density (vBMD) at proximal metaphysis and mid-diaphysis tibia were assessed by in vivo peripheral quantitative computed tomography (pQCT) on days 0 and 21 and standard dynamic histomorphometry was performed on undemineralized sections of the mid-diaphysis after tissue harvest. G/6 caused a significant decrease (P<0.001) in proximal tibia metaphysis total vBMD (-9.6%). These reductions of tibia metaphyseal vBMD in G/6 mice were mitigated in both G/6+RUN and G/6+CLB groups (P<0.05). After 21 days of G/6, we saw an absolute increase in tibia mid-diaphysis vBMD and in distal metaphysis femur vBMD in both G/6+RUN and G/6+CLB mice (P<0.05). Substantial increases in endocortical and periosteal mineralizing surface (MS/BS) at mid-diaphysis tibia in G/6+CLB demonstrate that bone formation can be increased even in the presence of reduced weight bearing. These data suggest that moderately vigorous endurance exercise and resistance training, through treadmill running or climb training mitigates decrements in vBMD during 21 days of reduced weight bearing. Consistent with our hypothesis, tower climb training, most pronounced in the tibia mid-diaphysis, provides a more potent osteogenic response compared to treadmill running.
Rings of non-spherical, axisymmetric bodies
NASA Astrophysics Data System (ADS)
Gupta, Akash; Nadkarni-Ghosh, Sharvari; Sharma, Ishan
2018-01-01
We investigate the dynamical behavior of rings around bodies whose shapes depart considerably from that of a sphere. To this end, we have developed a new self-gravitating discrete element N-body code, and employed a local simulation method to simulate a patch of the ring. The central body is modeled as a symmetric (oblate or prolate) ellipsoid, or defined through the characteristic frequencies (circular, vertical, epicyclic) that represent its gravitational field. Through our simulations we explore how a ring's behavior - characterized by dynamical properties like impact frequency, granular temperature, number density, vertical thickness and radial width - varies with the changing gravitational potential of the central body. We also contrast properties of rings about large central bodies (e.g. Saturn) with those of smaller ones (e.g. Chariklo). Finally, we investigate how the characteristic frequencies of a central body, restricted to being a solid of revolution with an equatorial plane of symmetry, affect the ring dynamics. The latter process may be employed to qualitatively understand the dynamics of rings about any symmetric solid of revolution.
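To make the frequency-based description of the central body concrete, here is a minimal Python sketch, not the paper's code, that computes the circular, epicyclic and vertical frequencies by finite differences from an assumed axisymmetric potential; the point-mass-plus-J2 potential and the Saturn-like constants are illustrative placeholders.

```python
# Characteristic frequencies of an axisymmetric central body from its potential, using
# Omega^2 = (1/R) dPhi/dR, kappa^2 = R dOmega^2/dR + 4 Omega^2, nu^2 = d2Phi/dz2 (at z = 0).
import numpy as np

G, M, a, J2 = 6.674e-11, 5.68e26, 6.0e7, 1.6e-2   # SI units; Saturn-like placeholders

def phi(R, z):
    """Monopole plus J2 quadrupole potential of an oblate body (illustrative)."""
    r = np.hypot(R, z)
    cos_t = z / r
    P2 = 0.5 * (3.0 * cos_t**2 - 1.0)
    return -G * M / r * (1.0 - J2 * (a / r)**2 * P2)

def frequencies(R, h=1.0e3):
    """Circular, epicyclic and vertical frequencies at radius R (finite-difference step h, in m)."""
    omega2 = (phi(R + h, 0.0) - phi(R - h, 0.0)) / (2.0 * h) / R
    omega2_p = (phi(R + 2*h, 0.0) - phi(R, 0.0)) / (2.0 * h) / (R + h)
    omega2_m = (phi(R, 0.0) - phi(R - 2*h, 0.0)) / (2.0 * h) / (R - h)
    kappa2 = R * (omega2_p - omega2_m) / (2.0 * h) + 4.0 * omega2
    nu2 = (phi(R, h) - 2.0 * phi(R, 0.0) + phi(R, -h)) / h**2
    return np.sqrt(omega2), np.sqrt(kappa2), np.sqrt(nu2)

print(frequencies(1.0e8))   # for an oblate body one expects nu > Omega > kappa
```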
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
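As a toy instance of the group-equivariant idea, a matrix that commutes with a cyclic group action (a circulant matrix) is diagonalized by the Fourier transform over that group, so the matrix-vector product reduces to element-wise scaling in frequency space. The NumPy sketch below is illustrative only and is unrelated to the thesis code.

```python
# Circulant (Z_n-equivariant) matrix-vector product via the group Fourier transform.
import numpy as np

n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)     # first column generates the circulant matrix
x = rng.standard_normal(n)

# Dense group-equivariant matrix: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# Fast path: the DFT over Z_n diagonalizes C, so C @ x is a circular convolution.
y_fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

assert np.allclose(C @ x, y_fast)   # dense and Fourier-domain products agree
```

The same structure generalizes to non-abelian groups such as the dihedral groups, where the Fourier transform block-diagonalizes the equivariant matrix rather than fully diagonalizing it.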
Accurate mass and velocity functions of dark matter haloes
NASA Astrophysics Data System (ADS)
Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly
2017-08-01
N-body cosmological simulations are an essential tool for understanding the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙, with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. Using the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias and the halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to support the development of halo occupation distribution modelling based on Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.
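As a schematic of how a differential halo mass function is estimated from a catalogue, the Python sketch below bins halo masses in log mass and normalizes by the simulation volume; the toy catalogue, box size and binning are placeholders and do not reproduce the MultiDark analysis.

```python
# Estimate dn/dlnM from a halo catalogue (illustrative sketch, not the paper's pipeline).
import numpy as np

def halo_mass_function(masses, box_size_mpc_h, n_bins=30):
    """Return bin-centre masses [Msun/h] and dn/dlnM [(Mpc/h)^-3]."""
    volume = box_size_mpc_h**3                 # comoving volume in (Mpc/h)^3
    ln_m = np.log(masses)
    counts, edges = np.histogram(ln_m, bins=n_bins)
    d_ln_m = np.diff(edges)
    centres = np.exp(0.5 * (edges[:-1] + edges[1:]))
    return centres, counts / (volume * d_ln_m)

# Toy power-law catalogue above 1e11 Msun/h in a (1 Gpc/h)^3 box (placeholders).
masses = 1e11 * (1.0 - np.random.default_rng(1).random(100_000)) ** -1.0
m_centres, dn_dlnm = halo_mass_function(masses, box_size_mpc_h=1000.0)
```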
Improving the sensitivity and specificity of the abbreviated injury scale coding system.
Kramer, C F; Barancik, J I; Thode, H C
1990-01-01
The Abbreviated Injury Scale with Epidemiologic Modifications (AIS 85-EM) was developed to make it possible to code information about anatomic injury types and locations that, although generally available from medical records, is not codable under the standard Abbreviated Injury Scale, published by the American Association for Automotive Medicine in 1985 (AIS 85). In a population-based sample of 3,223 motor vehicle trauma cases, 68 percent of the patients had one or more injuries that were coded to the non-specific AIS 85 body region category 'external'. When the same patients' injuries were coded using the AIS 85-EM coding procedure, only 15 percent of the patients had injuries that could not be coded to a specific body region. With AIS 85-EM, the proportion of codable head injury cases increased from 16 percent to 37 percent, thereby improving the potential for identifying cases with head and threshold brain injury. The data suggest that body region coding of all injuries is necessary to draw valid and reliable conclusions about changes in injury patterns and their sequelae. The increased specificity of body region coding improves assessments of the efficacy of injury intervention strategies and countermeasure programs using epidemiologic methodology. PMID:2116633
Swimsuit issues: promoting positive body image in young women's magazines.
Boyd, Elizabeth Reid; Moncrieff-Boyd, Jessica
2011-08-01
This preliminary study reviews the promotion of healthy body image to young Australian women, following the 2009 introduction of the voluntary Industry Code of Conduct on Body Image. The Code includes using diverse sized models in magazines. A qualitative content analysis of the 2010 annual 'swimsuit issues' was conducted on 10 Australian young women's magazines. Pictorial and/or textual editorial evidence of promoting diverse body shapes and sizes was regarded as indicative of the magazines' upholding aspects of the voluntary Code of Conduct for Body Image. Diverse sized models were incorporated in four of the seven magazines with swimsuit features sampled. Body size differentials were presented as part of the swimsuit features in three of the magazines sampled. Tips for diverse body type enhancement were included in four of the magazines. All magazines met at least one criterion. One magazine displayed evidence of all three criteria. Preliminary examination suggests that more than half of young women's magazines are upholding elements of the voluntary Code of Conduct for Body Image, through representation of diverse-sized women in their swimsuit issues.
Biological and environmental determinants of 12-minute run performance in youth.
Freitas, Duarte; Maia, José; Stasinopoulos, Mikis; Gouveia, Élvio Rúbio; Antunes, António M; Thomis, Martine; Lefevre, Johan; Claessens, Albrecht; Hedeker, Donald; Malina, Robert M
2017-11-01
The 12-minute run is a commonly used indicator of cardiorespiratory fitness in youth. Variation in growth and maturity status as potential correlates of test performance has not been systematically addressed. To evaluate biological and environmental determinants of 12-minute run performance in Portuguese youth aged 7-17 years. Mixed-longitudinal samples of 187 boys and 142 girls were surveyed in 1996, 1997 and 1998. The 12-minute run was the indicator of cardiorespiratory fitness. Height, body mass and five skinfolds were measured and skeletal maturity was assessed. Physical activity, socioeconomic status and area of residence were obtained with a questionnaire. Multi-level modelling was used for the analysis. Chronological age and sum of five skinfolds were significant predictors of 12-minute run performance. Older boys and girls ran longer distances than younger peers, while high levels of subcutaneous fat were associated with shorter running distances. Rural boys were more proficient in the 12-minute run than urban peers. Skeletal maturity, height, body mass index, physical activity and socioeconomic status were not significant predictors of 12-minute run performances. Age and sum of skinfolds in both sexes and rural residence in boys are significant predictors of 12-minute run performance in Portuguese youth.
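For readers unfamiliar with the multi-level modelling used here, the sketch below fits a mixed-effects model with fixed effects for age and the sum of skinfolds and a random intercept per subject, which is one standard way to handle repeated measurements in a mixed-longitudinal design. The column names and the data file are hypothetical; this is not the authors' analysis code.

```python
# Multi-level (mixed-effects) model of 12-minute run distance (illustrative sketch).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("twelve_minute_run.csv")   # hypothetical long-format data set

# Fixed effects: age and sum of five skinfolds; a random intercept per subject
# accounts for repeated observations of the same child across survey years.
model = smf.mixedlm("distance_m ~ age + sum_skinfolds", df, groups=df["subject_id"])
result = model.fit()
print(result.summary())
```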
VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
2000-02-01
(1) Primary data files, stages.zz: These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz = 06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j, tabulated on a mesh of (T, Ne, CHI), where T is temperature, Ne is electron density and CHI is an abundance multiplier. The files also include ionisation fractions for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f.
(2) Code add.f: This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for the names of the input and output files. As provided, the code gives equal weights (as defined in the paper ACC) to all stages; the weights are set in SUBROUTINE WEIGHTS, which can be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, defined in SUBROUTINE ZETA; the expressions used for ZET are as given in the paper, and the user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET averaged over ionisation stages.
(3) Files acc.zz: Radiative accelerations computed using add.f as provided. The user will need to run add.f only if the subroutines WEIGHTS or ZETA are to be changed. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f.
(4) Code accfit.f: This code gives radiative accelerations, and some related data, for a stellar model. The methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f; it uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance multiplier CHI on the output file will generally be finer than that used in the input files acc.zz; the mesh to be used is specified in the file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff = 10000 K, LOG10(g) = 4.2); the output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f.
(5) Code diff.f: This code reads the output file (e.g. acc1000004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of the radiative accelerations, the quantity ZETBAR required for the calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data and creates a file recording all data calculated. diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations.
(1 data file).
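For orientation only, the sketch below shows one way tabulated accelerations could be interpolated to a stellar-model point once a file has been read into arrays; the mesh sizes, variable names and the use of a regular-grid interpolator are assumptions, and the actual file layout and interpolation method are those described in the paper ACC and in the codes add.f and accfit.f.

```python
# Interpolating a tabulated radiative-acceleration grid to one (log T, log Ne) point.
# Placeholder meshes and table values; not the Seaton et al. codes or file formats.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_T = np.linspace(4.0, 7.0, 31)       # placeholder temperature mesh
log_Ne = np.linspace(16.0, 24.0, 41)    # placeholder electron-density mesh
log_grad = np.random.default_rng(0).random((31, 41))   # placeholder log GRAD table

interp = RegularGridInterpolator((log_T, log_Ne), log_grad)
print(interp([[4.5, 19.2]]))            # log GRAD at one depth point of a stellar model
```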