Mo, Shiwei; Chow, Daniel H K
2018-05-19
Motor control, which relates to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed to improve performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed, or whether there are differences between runners with different training experience. To compare characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER). Both NR (n = 17) and ER (n = 17) performed a 31-min treadmill run at his or her AT speed. Stride interval dynamics were obtained throughout the run, with the middle 30 min equally divided into six time intervals (denoted T1, T2, T3, T4, T5 and T6). The mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval of each group. This study revealed that mean stride interval increased significantly with running time, following a non-linear trend (p<0.001). Stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha differed significantly between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups, with slight differences. These findings provide insights into how the motor control system adapts to the progression of fatigue and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve the goal. Copyright © 2018. Published by Elsevier B.V.
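For readers unfamiliar with the three measures named above, the sketch below computes the mean, coefficient of variation, and a detrended fluctuation analysis (DFA) scaling exponent alpha for a stride-interval series. The window sizes and the synthetic data are assumptions for illustration, not the protocol used in the study.

```python
import numpy as np

def stride_interval_measures(intervals, scales=(4, 8, 16, 32, 64)):
    """Mean, CV and DFA scaling exponent alpha of a stride-interval series (sketch)."""
    intervals = np.asarray(intervals, dtype=float)
    mean = intervals.mean()
    cv = intervals.std(ddof=1) / mean * 100.0          # coefficient of variation in %

    # DFA: integrate the mean-subtracted series, then measure RMS fluctuation
    # around a local linear trend at several window sizes
    y = np.cumsum(intervals - mean)
    fluctuations = []
    for n in scales:
        n_windows = len(y) // n
        f2 = []
        for w in range(n_windows):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # alpha is the slope of log F(n) versus log n
    alpha = np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
    return mean, cv, alpha

# Example with synthetic data (hypothetical values, not study data)
rng = np.random.default_rng(0)
series = 1.4 + 0.02 * rng.standard_normal(600)   # ~1.4 s stride intervals
print(stride_interval_measures(series))
```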
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter, Hubble-sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle, leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity, leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes, thereby reducing the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increased the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of removing small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times and meet project schedule deadlines.
A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data
Li, Pengfei; Li, Yan; Guo, Xiucheng
2014-01-01
The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs), which approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of the system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems. PMID:25435870
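The abstract does not specify the network architecture or input features; as a purely hypothetical illustration of the approach, the sketch below trains a small feed-forward classifier on yellow-onset trajectory features (assumed here to be speed, distance to the stop line, and acceleration) to flag potential red-light runners on synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features at yellow onset: [speed (m/s), distance to stop line (m), acceleration (m/s^2)]
rng = np.random.default_rng(1)
X = rng.uniform([5, 0, -3], [25, 80, 3], size=(2000, 3))

# Toy labelling rule: vehicles too far away to clear in time are treated as potential runners
time_to_stop_line = X[:, 1] / np.maximum(X[:, 0], 1e-3)
y = (time_to_stop_line < 3.0) & (X[:, 1] > 15)          # synthetic labels, not field data

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))   # the paper reports >80% for its ANN
```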
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Application analysis is facilitated through a number of program profiling tools. The tools vary in their complexity, ease of deployment, design, and profiling detail. Specifically, understanding, analyzing, and optimizing are of particular importance for scientific applications, where minor changes in code paths and data-structure layout can have profound effects. Understanding how intricate data-structures are accessed and how a given memory system responds is a complex task. In this paper we describe a trace profiling tool, Glprof, specifically aimed at lessening the burden on the programmer to pin-point heavily involved data-structures during an application's run-time, and to understand data-structure run-time usage. Moreover, we showcase the tool's modularity using additional cache simulation components. We elaborate on the tool's design and features. Finally we demonstrate the application of our tool in the context of Spec benchmarks using the Glprof profiler and two concurrently running cache simulators, PPC440 and AMD Interlagos.
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
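The improved O(nm log m) algorithm itself is not given in the abstract; as a rough illustration of the underlying mapping problem, the sketch below assigns a chain of m pipelined modules to n processors as contiguous blocks so that the load on the busiest processor (the bottleneck) is minimized, using a binary search over candidate bottleneck values.

```python
def min_bottleneck_mapping(loads, n_procs):
    """Partition a chain of module loads into n_procs contiguous blocks,
    minimizing the maximum per-processor load (a simplified mapping problem)."""
    def feasible(limit):
        # Greedily pack modules into blocks without exceeding the candidate limit
        blocks, current = 1, 0.0
        for w in loads:
            if w > limit:
                return False
            if current + w > limit:
                blocks += 1
                current = w
            else:
                current += w
        return blocks <= n_procs

    lo, hi = max(loads), sum(loads)
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Example: 8 module workloads mapped onto 3 processors
print(min_bottleneck_mapping([3, 1, 4, 1, 5, 9, 2, 6], 3))
```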
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, its subsequent use in calibration of a complex model, and analysis of the uncertainties of predictions made by that model, are implemented in the PEST suite.
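A minimal sketch of the workflow described above, in a generic least-squares setting: the Jacobian is filled using a cheap proxy function while candidate parameter upgrades are tested with the expensive original model. The function names and the damping update are illustrative, not the PEST implementation.

```python
import numpy as np

def calibrate(original_model, proxy_model, params, observations, n_iter=10, lam=1e-2):
    """Gauss-Levenberg-Marquardt loop with a proxy model supplying the Jacobian (sketch)."""
    p = np.array(params, dtype=float)
    for _ in range(n_iter):
        residual = observations - original_model(p)          # expensive run: assess current fit
        # Finite-difference Jacobian filled from the cheap proxy model
        J = np.empty((len(residual), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (proxy_model(p + dp) - proxy_model(p)) / dp[j]
        # Damped normal-equations upgrade; the upgrade is then tested with the original model
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ residual)
        trial = p + step
        if np.sum((observations - original_model(trial)) ** 2) < np.sum(residual ** 2):
            p = trial                                         # accept upgrade
            lam *= 0.5
        else:
            lam *= 2.0                                        # reject and increase damping
    return p
```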
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries. Computational time increases as finer grids are used (for accuracy). A strong tool, but it takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.
Complex Event Recognition Architecture
NASA Technical Reports Server (NTRS)
Fitzgerald, William A.; Firby, R. James
2009-01-01
Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
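The CERA pattern language itself is not reproduced in the abstract; the toy sketch below only conveys the flavor of declarative patterns built from conjunction, disjunction, and negation over point events from multiple input streams. All class, stream, and pattern names are invented for illustration.

```python
# Toy event-pattern combinators (hypothetical, not the CERA language itself)
class Event:
    def __init__(self, stream, name, t):
        self.stream, self.name, self.t = stream, name, t

def point(stream, name):
    return lambda events: any(e.stream == stream and e.name == name for e in events)

def conj(*patterns):
    return lambda events: all(p(events) for p in patterns)

def disj(*patterns):
    return lambda events: any(p(events) for p in patterns)

def neg(pattern):
    return lambda events: not pattern(events)

# "Anomaly" pattern: over-temperature together with either a pressure spike or a missing heartbeat
anomaly = conj(point("thermal", "over_temp"),
               disj(point("pressure", "spike"), neg(point("health", "heartbeat"))))

window = [Event("thermal", "over_temp", 10.2), Event("pressure", "spike", 10.4)]
print(anomaly(window))   # True -> signal operators or trigger a corrective action
```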
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are hindered by the complexities of porting models onto parallel platforms and the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
2016-08-01
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code, under GPL license, is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
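A minimal sketch of how a packaged algorithm might be invoked through the RESTful web API mentioned above; the host, port, endpoint path, and payload format are assumptions for illustration and should be taken from the actual AlgoRun documentation.

```python
import requests

# Hypothetical endpoint of a locally running AlgoRun container (host, port and path are assumptions)
ALGORUN_URL = "http://localhost:8765/v1/run"

payload = {"input": ">seq1\nACGTACGTAGCTAGCT"}   # example input expected by the packaged algorithm
response = requests.post(ALGORUN_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json())                            # the algorithm's output document
```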
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of the real-time water situation, real-time simulation of large-scale floods is very important in flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
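As a rough sketch of the run-time autotuning idea (not the CUSH or autotuner interface from this work), the snippet below times a kernel over a small set of candidate configurations and keeps the fastest; the tunable kernel and its configuration space are placeholders.

```python
import time
import numpy as np

def autotune(kernel, candidate_configs, *args):
    """Pick the fastest run-time configuration for a kernel (simple exhaustive autotuner)."""
    best_cfg, best_t = None, float("inf")
    for cfg in candidate_configs:
        t0 = time.perf_counter()
        kernel(*args, **cfg)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best_cfg, best_t = cfg, elapsed
    return best_cfg, best_t

# Placeholder "kernel": blocked matrix multiply whose block size is the tunable parameter
def blocked_matmul(A, B, block=64):
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

A = B = np.random.rand(256, 256)
print(autotune(blocked_matmul, [{"block": 16}, {"block": 32}, {"block": 64}, {"block": 128}], A, B))
```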
O'Malley, Kathleen G; Jacobson, Dave P; Kurth, Ryon; Dill, Allen J; Banks, Michael A
2013-01-01
Neutral genetic markers are routinely used to define distinct units within species that warrant discrete management. Human-induced changes to gene flow, however, may reduce the power of such an approach. We tested the efficiency of adaptive versus neutral genetic markers in differentiating temporally divergent migratory runs of Chinook salmon (Oncorhynchus tshawytscha) amid high gene flow owing to artificial propagation and habitat alteration. We compared seven putative migration timing genes to ten microsatellite loci in delineating three migratory groups of Chinook in the Feather River, CA: offspring of fall-run hatchery broodstock that returned as adults to freshwater in fall (fall run), spring-run offspring that returned in spring (spring run), and fall-run offspring that returned in spring (FRS). We found evidence for significant differentiation between the fall and the federally listed threatened spring groups based on divergence at three circadian clock genes (OtsClock1b, OmyFbxw11, and Omy1009UW), but not at neutral markers. We thus demonstrate the importance of genetic marker choice in resolving complex life history types. These findings directly impact conservation management strategies and add to previous evidence from Pacific and Atlantic salmon indicating that circadian clock genes influence migration timing. PMID:24478800
Transfer function of analog fiber-optic systems driven by Fabry-Perot lasers: comment
NASA Astrophysics Data System (ADS)
Gyula, Veszely
2006-10-01
Poor notation makes the paper by Capmany et al. [J. Opt. Soc. Am. B 22, 2099 (2005)] difficult to understand: the real time function and the complex time function run into one another.
Platform-Independence and Scheduling In a Multi-Threaded Real-Time Simulation
NASA Technical Reports Server (NTRS)
Sugden, Paul P.; Rau, Melissa A.; Kenney, P. Sean
2001-01-01
Aviation research often relies on real-time, pilot-in-the-loop flight simulation as a means to develop new flight software, flight hardware, or pilot procedures. Often these simulations become so complex that a single processor is incapable of performing the necessary computations within a fixed time-step. Threads are an elegant means to distribute the computational work-load when running on a symmetric multi-processor machine. However, programming with threads often requires operating system specific calls that reduce code portability and maintainability. While a multi-threaded simulation allows a significant increase in the simulation complexity, it also increases the workload of a simulation operator by requiring that the operator determine which models run on which thread. To address these concerns an object-oriented design was implemented in the NASA Langley Standard Real-Time Simulation in C++ (LaSRS++) application framework. The design provides a portable and maintainable means to use threads and also provides a mechanism to automatically load balance the simulation models.
Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.
2014-01-01
Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
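A simplified sketch of the cascaded fitting strategy discussed above, using SciPy's gradient-free Powell optimizer: a simpler model is fitted first and its estimate initializes a more complex model. The model functions are placeholders, not NODDI or CHARMED, and this CPU sketch ignores the GPU implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_cascade(signal, bvals, simple_model, complex_model, x0_simple):
    """Two-step cascade: fit a simple model, then use its estimate to initialize a complex one."""
    sse = lambda model, x: np.sum((signal - model(x, bvals)) ** 2)

    step1 = minimize(lambda x: sse(simple_model, x), x0_simple, method="Powell")
    # Initialize the complex model from the simple fit; extra parameters start at a default value
    x0_complex = np.concatenate([step1.x, [0.5]])
    step2 = minimize(lambda x: sse(complex_model, x), x0_complex, method="Powell")
    return step2.x

# Placeholder models: mono-exponential decay, then a two-compartment mixture
mono = lambda x, b: x[0] * np.exp(-b * x[1])
biexp = lambda x, b: x[0] * (x[2] * np.exp(-b * x[1]) + (1 - x[2]) * np.exp(-b * 3e-3))

bvals = np.linspace(0, 3000, 30)
signal = biexp([1.0, 1e-3, 0.7], bvals) + 0.01 * np.random.default_rng(0).standard_normal(30)
print(fit_cascade(signal, bvals, mono, biexp, x0_simple=[1.0, 1e-3]))
```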
Stock-specific migration timing of adult spring-summer Chinook salmon in the Columbia River basin
Keefer, M.L.; Peery, C.A.; Jepson, M.A.; Tolotti, K.R.; Bjornn, T.C.; Stuehrenberg, L.C.
2004-01-01
An understanding of the migration timing patterns of Pacific salmon Oncorhynchus spp. and steelhead O. mykiss is important for managing complex mixed-stock fisheries and preserving genetic and life history diversity. We examined adult return timing for 3,317 radio-tagged fish from 38 stocks of Columbia River basin spring-summer Chinook salmon O. tshawytscha over 5 years. Stock composition varied widely within and between years depending on the strength of influential populations. Most individual stocks migrated at similar times each year relative to overall runs, supporting the hypotheses that run timing is predictable, is at least partially due to genetic adaptation, and can be used to differentiate between some conspecific populations. Arrival timing of both aggregated radio-tagged stocks and annual runs was strongly correlated with river discharge; stocks arrived earlier at Bonneville Dam and at upstream dams in years with low discharge. Migration timing analyses identified many between-stock and between-year differences in anadromous salmonid return behavior and should aid managers interested in the protection and recovery of evolutionarily significant populations.
qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2008-10-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
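The snippet below only illustrates the kind of run-time flexibility described above: choosing and reordering which routines execute each step from within Python. The class, attribute, and routine names are schematic stand-ins, not the actual qtcm interface.

```python
# Schematic illustration of a mixed-language model driver (names are illustrative, not the qtcm API)
class MixedLanguageModel:
    def __init__(self, runlist):
        self.runlist = list(runlist)      # order and choice of routines, changeable at run time

    def step(self, routines):
        for name in self.runlist:
            routines[name]()              # each entry would wrap a compiled Fortran routine

fortran_routines = {
    "advection":  lambda: print("advection (Fortran kernel)"),
    "convection": lambda: print("convection (Fortran kernel)"),
    "radiation":  lambda: print("radiation (Fortran kernel)"),
}

model = MixedLanguageModel(["advection", "convection", "radiation"])
model.step(fortran_routines)

# Reorder or drop routines between calls -- no recompilation needed
model.runlist = ["radiation", "advection"]
model.step(fortran_routines)
```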
qtcm 0.1.2: a Python implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2009-02-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
NASA Astrophysics Data System (ADS)
Lin, J. W. B.
2015-12-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran, to optimize model performance but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution at run time. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
Factors That Influence Running Intensity in Interchange Players in Professional Rugby League.
Delaney, Jace A; Thornton, Heidi R; Duthie, Grant M; Dascombe, Ben J
2016-11-01
Rugby league coaches adopt replacement strategies for their interchange players to maximize running intensity; however, it is important to understand the factors that may influence match performance. To assess the independent factors affecting running intensity sustained by interchange players during professional rugby league. Global positioning system (GPS) data were collected from all interchanged players (starters and nonstarters) in a professional rugby league squad across 24 matches of a National Rugby League season. A multilevel mixed-model approach was employed to establish the effect of various technical (attacking and defensive involvements), temporal (bout duration, time in possession, etc), and situational (season phase, recovery cycle, etc) factors on the relative distance covered and average metabolic power (Pmet) during competition. Significant effects were standardized using correlation coefficients, and the likelihood of the effect was described using magnitude-based inferences. Superior intermittent running ability resulted in very likely large increases in both relative distance and Pmet. As the length of a bout increased, both measures of running intensity exhibited a small decrease. There were at least likely small increases in running intensity for matches played after short recovery cycles and against strong opposition. During a bout, the number of collision-based involvements increased running intensity, whereas time in possession and ball time out of play decreased demands. These data demonstrate a complex interaction of individual- and match-based factors that require consideration when developing interchange strategies, and the manipulation of training loads during shorter recovery periods and against stronger opponents may be beneficial.
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system that has been built within the adevs simulation package supports models with state-events and time-events, and models comprising differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
Online Community Detection for Large Complex Networks
Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian
2014-01-01
Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm while it gives competitive performance. PMID:25061683
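The paper's constant-time, expected-modularity update is not spelled out in the abstract; the sketch below shows a much simplified streaming update in which each new edge touches only its two endpoints, which conveys the edge-by-edge, no-recomputation idea rather than the published algorithm.

```python
from collections import defaultdict

community = {}                                      # node -> community id
links_to = defaultdict(lambda: defaultdict(int))    # node -> {community id: edge count}

def add_edge(u, v):
    """Process one streamed edge; only the two endpoints are updated (constant work per edge)."""
    for node in (u, v):
        community.setdefault(node, node)             # a new node starts as its own community
    links_to[u][community[v]] += 1
    links_to[v][community[u]] += 1
    # Greedy move: each endpoint adopts the neighbouring community it is most connected to
    for node in (u, v):
        best = max(links_to[node], key=links_to[node].get)
        community[node] = best

for edge in [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]:
    add_edge(*edge)
print(community)
```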
Oral surgical handpiece use time parameters.
Roberts, Howard W; Cohen, Mark E; Murchison, David F
2005-07-01
To evaluate the clinical usage time parameters of handpieces used in oral surgical procedures. One hundred randomly selected clinical oral surgery exodontia procedures were timed to record lengths of continuous segments of both handpiece use and non-usage. Providers with experience ranging from general dentists to board-certified oral surgeons were timed during surgical exodontia treatment involving 1 to 4 teeth of various complexities. Usage times were compared with manufacturers' recommendations that on-times should not exceed 20 seconds in any 50-second interval (20/50 rule). Handpiece run time increased with the number of teeth and surgical case complexity (both P < .001) but was unrelated to operator experience (P = .763), in a 3-predictor model (R^2 = 0.20; P < .001). Ninety-four of the 100 cases experienced at least 1 second in violation of the 20/50 rule and 42% of all run seconds were in violation. Clinicians should be aware of recommended handpiece duty use cycles. Manufacturers' recommendations about handpiece use time cycles do not reflect actual clinical usage. Under the conditions of this study, actual surgical handpiece use time was not correlated with user experience. Less experienced providers did require longer to complete treatment, but increased treatment times were due to time spent that did not require surgical handpiece use.
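The 20/50 rule lends itself to a simple check. The sketch below is one plausible interpretation (run time within any trailing 50-second window must not exceed 20 seconds), applied to a hypothetical per-second on/off usage record.

```python
def rule_20_50_violations(on_off):
    """Count run seconds that occur while the trailing 50-second window already holds >20 run seconds.

    on_off: sequence of 0/1 flags, one per second (1 = handpiece running)."""
    violations = 0
    for i, running in enumerate(on_off):
        window = on_off[max(0, i - 49):i + 1]     # trailing 50-second window including this second
        if running and sum(window) > 20:
            violations += 1
    return violations

# Example: 30 s of continuous running followed by 30 s idle (hypothetical usage record)
usage = [1] * 30 + [0] * 30
print(rule_20_50_violations(usage))   # seconds 21-30 of the burst violate the 20/50 rule
```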
NASA Astrophysics Data System (ADS)
Randers, Jorgen; Golüke, Ulrich; Wenstøp, Fred; Wenstøp, Søren
2016-11-01
We have made a simple system dynamics model, ESCIMO (Earth System Climate Interpretable Model), which runs on a desktop computer in seconds and is able to reproduce the main output from more complex climate models. ESCIMO represents the main causal mechanisms at work in the Earth system and is able to reproduce the broad outline of climate history from 1850 to 2015. We have run many simulations with ESCIMO to 2100 and beyond. In this paper we present the effects of introducing in 2015 six possible global policy interventions that cost around USD 1000 billion per year - around 1 % of world GDP. We tentatively conclude (a) that these policy interventions can at most reduce the global mean surface temperature - GMST - by up to 0.5 °C in 2050 and up to 1.0 °C in 2100 relative to no intervention. The exception is injection of aerosols into the stratosphere, which can reduce the GMST by more than 1.0 °C in a decade but creates other serious problems. We also conclude (b) that relatively cheap human intervention can keep global warming in this century below +2 °C relative to preindustrial times. Finally, we conclude (c) that run-away warming is unlikely to occur in this century but is likely to occur in the longer run. The ensuing warming is slow, however. In ESCIMO, it takes several hundred years to lift the GMST to +3 °C above preindustrial times through gradual self-reinforcing melting of the permafrost. We call for research to test whether more complex climate models support our tentative conclusions from ESCIMO.
Knotty: Efficient and Accurate Prediction of Complex RNA Pseudoknot Structures.
Jabbari, Hosna; Wark, Ian; Montemagno, Carlo; Will, Sebastian
2018-06-01
The computational prediction of RNA secondary structure by free energy minimization has become an important tool in RNA research. However in practice, energy minimization is mostly limited to pseudoknot-free structures or rather simple pseudoknots, not covering many biologically important structures such as kissing hairpins. Algorithms capable of predicting sufficiently complex pseudoknots (for sequences of length n) used to have extreme complexities, e.g. Pknots (Rivas and Eddy, 1999) has O(n^6) time and O(n^4) space complexity. The algorithm CCJ (Chen et al., 2009) dramatically improves the asymptotic run time for predicting complex pseudoknots (handling almost all relevant pseudoknots, while being slightly less general than Pknots), but this came at the cost of large constant factors in space and time, which strongly limited its practical application (∼200 bases already require 256GB space). We present a CCJ-type algorithm, Knotty, that handles the same comprehensive pseudoknot class of structures as CCJ with improved space complexity of Θ(n^3 + Z)-due to the applied technique of sparsification, the number of "candidates", Z, appears to grow significantly slower than n^4 on our benchmark set (which include pseudoknotted RNAs up to 400 nucleotides). In terms of run time over this benchmark, Knotty clearly outperforms Pknots and the original CCJ implementation, CCJ 1.0; Knotty's space consumption fundamentally improves over CCJ 1.0, being on a par with the space-economic Pknots. By comparing to CCJ 2.0, our unsparsified Knotty variant, we demonstrate the isolated effect of sparsification. Moreover, Knotty employs the state-of-the-art energy model of "HotKnots DP09", which results in superior prediction accuracy over Pknots. Our software is available at https://github.com/HosnaJabbari/Knotty. will@tbi.unvie.ac.at. Supplementary data are available at Bioinformatics online.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The two-grid run performs substantially better than the run with one grid because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.
SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.
Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf
2015-08-01
RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (quartic, O(n^4), time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.
Software Accelerates Computing Time for Complex Math
NASA Technical Reports Server (NTRS)
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina
2015-01-01
The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We discuss key factors to consider in making the decision to perform shake-down or engineering runs. We also present industry benchmarking results of how engineering runs are used in drug substance technology transfers alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.
Run-and-tumble-like motion of active colloids in viscoelastic media
NASA Astrophysics Data System (ADS)
Lozano, Celia; Ruben Gomez-Solano, Juan; Bechinger, Clemens
2018-01-01
Run-and-tumble motion is a prominent locomotion strategy employed by many living microorganisms. It is characterized by straight swimming intervals (runs), which are interrupted by sudden reorientation events (tumbles). In contrast, directional changes of synthetic microswimmers (active particles) are caused by rotational diffusion, which is superimposed with their translational motion and thus leads to rather continuous and slow particle reorientations. Here we demonstrate that active particles can also perform a swimming motion where translational and orientational changes are disentangled, similar to run-and-tumble. In our system, such motion is realized by a viscoelastic solvent and a periodic modulation of the self-propulsion velocity. Experimentally, this is achieved using light-activated Janus colloids, which are illuminated by a time-dependent laser field. We observe a strong enhancement of the effective translational and rotational motion when the modulation time is comparable to the relaxation time of the viscoelastic fluid. Our findings are explained by the relaxation of the elastic stress, which builds up during the self-propulsion, and is suddenly released when the activity is turned off. In addition to a better understanding of active motion in viscoelastic surroundings, our results may suggest novel steering strategies for synthetic microswimmers in complex environments.
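As a purely illustrative toy model (not the experimental system, and without the viscoelastic stress relaxation), the snippet below simulates a two-dimensional active particle whose propulsion speed is switched on and off periodically while its orientation undergoes rotational diffusion, mimicking how modulated activity can produce run-and-tumble-like trajectories.

```python
import numpy as np

def simulate(n_steps=5000, dt=0.01, v0=5.0, period=2.0, duty=0.5, D_r=0.2, seed=0):
    """2D active particle with periodically modulated propulsion and rotational diffusion (toy model)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        t = i * dt
        active = (t % period) < duty * period                     # propulsion on for part of each period
        v = v0 if active else 0.0
        theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()    # rotational diffusion of the orientation
        pos[i] = pos[i - 1] + v * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

trajectory = simulate()
displacement = np.linalg.norm(trajectory[-1] - trajectory[0])
print(f"net displacement after run: {displacement:.2f} (arbitrary units)")
```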
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real-engineering problems that involve complex geometries with a high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Protein complex prediction for large protein protein interaction networks with the Core&Peel method.
Pellegrini, Marco; Baglioni, Miriam; Geraci, Filippo
2016-11-08
Biological networks play an increasingly important role in the exploration of functional modularity and cellular organization at a systemic level. Quite often the first tools used to analyze these networks are clustering algorithms. We concentrate here on the specific task of predicting protein complexes (PC) in large protein-protein interaction networks (PPIN). Currently, many state-of-the-art algorithms work well for networks of small or moderate size. However, their performance on much larger networks, which are becoming increasingly common in modern proteome-wide studies, needs to be re-assessed. We present a new fast algorithm for clustering large sparse networks: Core&Peel, which runs essentially in time and storage O(a(G)m+n) for a network G of n nodes and m arcs, where a(G) is the arboricity of G (which is roughly proportional to the maximum average degree of any induced subgraph in G). We evaluated Core&Peel on five PPI networks of large size and one of medium size from both yeast and homo sapiens, comparing its performance against those of ten state-of-the-art methods. We demonstrate that Core&Peel consistently outperforms the ten competitors in its ability to identify known protein complexes and in the functional coherence of its predictions. Our method is remarkably robust, being quite insensitive to the injection of random interactions. Core&Peel is also empirically efficient, attaining the second-best running time over large networks among the tested algorithms. Our algorithm Core&Peel pushes forward the state of the art in PPIN clustering, providing an algorithmic solution with polynomial running time that attains experimentally demonstrable good output quality and speed on challenging large real networks.
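Core&Peel is not specified in the abstract beyond its name and complexity bound; the sketch below shows classic k-core peeling (iteratively removing minimum-degree nodes), the kind of degeneracy-based step that arboricity-bounded methods build on. It illustrates the concept, not the published algorithm.

```python
from collections import defaultdict
import heapq

def core_numbers(edges):
    """Compute the core number of every node by iterative minimum-degree peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    heap = [(d, u) for u, d in degree.items()]
    heapq.heapify(heap)
    core, removed, k = {}, set(), 0
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != degree[u]:
            continue                       # stale heap entry
        k = max(k, d)                      # core values never decrease along the peeling order
        core[u] = k
        removed.add(u)
        for w in adj[u]:
            if w not in removed and degree[w] > d:
                degree[w] -= 1
                heapq.heappush(heap, (degree[w], w))
    return core

# A triangle attached to a pendant node: triangle nodes are in the 2-core, the pendant in the 1-core
print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
```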
Nesting behavior of house mice (Mus domesticus) selected for increased wheel-running activity.
Carter, P A; Swallow, J G; Davis, S J; Garland, T
2000-03-01
Nest building was measured in "active" (housed with access to running wheels) and "sedentary" (without wheel access) mice (Mus domesticus) from four replicate lines selected for 10 generations for high voluntary wheel-running behavior, and from four randombred control lines. Based on previous studies of mice bidirectionally selected for thermoregulatory nest building, it was hypothesized that nest building would show a negative correlated response to selection on wheel-running. Such a response could constrain the evolution of high voluntary activity because nesting has also been shown to be positively genetically correlated with successful production of weaned pups. With wheel access, selected mice of both sexes built significantly smaller nests than did control mice. Without wheel access, selected females also built significantly smaller nests than did control females, but only when body mass was excluded from the statistical model, suggesting that body mass mediated this correlated response to selection. Total distance run and mean running speed on wheels was significantly higher in selected mice than in controls, but no differences in amount of time spent running were measured, indicating a complex cause of the response of nesting to selection for voluntary wheel running.
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes, and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
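The parallel algorithms themselves are not given in the abstract; for orientation, the sketch below shows a plain sequential arc-consistency pass in the spirit of AC-3 (AC-4 achieves the optimal bound with a different support-counting scheme), pruning domain values that lack support on some constraint arc.

```python
from collections import deque

def arc_consistency(domains, constraints):
    """Sequential AC-3-style propagation (illustrative; AC-4 achieves the optimal bound differently).

    domains: {var: set(values)}; constraints: {(x, y): predicate(vx, vy)} for directed arcs."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        unsupported = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            # Revisit arcs pointing into x, since removing values may break their support
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

# Toy problem: x < y with small integer domains
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a}
print(arc_consistency(doms, cons))   # x loses 3, y loses 1
```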
Holtkamp, Hannah U; Morrow, Stuart J; Kubanik, Mario; Hartinger, Christian G
2017-07-01
Run-by-run variations are very common in capillary electrophoretic (CE) separations and cause imprecision in both the migration times and the peak areas. This makes peak and kinetic trend identification difficult and error prone. With the aim to identify suitable standards for CE separations which are compatible with the common detectors UV, ESI-MS, and ICP-MS, the Co(III) complexes [Co(en)3]Cl3, [Co(acac)3] and K[Co(EDTA)] were evaluated as internal standards in the reaction of the anticancer drug cisplatin and guanosine 5'-monophosphate as an example of a classical biological inorganic chemistry experiment. These Co(III) chelate complexes were considered for their stability, accessibility, and the low detection limit for Co in ICP-MS. Furthermore, the Co(III) complexes are positively and negatively charged as well as neutral, allowing detection in different areas of the electropherograms. The background electrolytes were chosen to cover a wide pH range. The compatibility with the separation conditions depended on the ligands attached to the Co(III) centers, with only the acetylacetonato (acac) complex being applicable in the pH range 2.8-9.0. Furthermore, because it is charge neutral, this compound could be used as an electroosmotic flow (EOF) marker. In general, employing Co complexes resulted in improved data sets, particularly with regard to the migration times and peak areas, which resulted, for example, in higher linear ranges for the quantification of cisplatin.
A falsely fat curvaton with an observable running of the spectral tilt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloso, Marco; Sorbo, Lorenzo; Tasinato, Gianmassimo, E-mail: peloso@physics.umn.edu, E-mail: sorbo@physics.umass.edu, E-mail: gianmassimo.tasinato@port.ac.uk
2014-06-01
In slow roll inflation, the running of the spectral tilt is generically proportional to the square of the deviation from scale invariance, α_s ∝ (n_s − 1)^2, and is therefore currently undetectable. We present a mechanism able to generate a much larger running within slow roll. The mechanism is based on a curvaton field with a large mass term, and a time evolving normalization. This may happen for instance to the angular direction of a complex field in presence of an evolving radial direction. At the price of a single tuning between the mass term and the rate of change of the normalization, the curvaton can be made effectively light at the CMB scales, giving a spectral tilt in agreement with observations. The lightness is not preserved at later times, resulting in a detectable running of the spectral tilt. This mechanism shows that fields with a large mass term do not necessarily decouple from the inflationary physics, and provides a new tool for model building in inflation.
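For orientation, the statement above can be connected to the standard single-field slow-roll expressions (textbook relations, not results derived in this paper); since the slow-roll parameters are individually of order (n_s − 1), the running α_s is generically second order in slow roll:

```latex
n_s - 1 \simeq 2\eta - 6\epsilon , \qquad
\alpha_s \equiv \frac{\mathrm{d} n_s}{\mathrm{d}\ln k} \simeq 16\,\epsilon\,\eta - 24\,\epsilon^{2} - 2\,\xi^{2} ,
\qquad \text{so generically} \qquad \alpha_s = \mathcal{O}\!\big((n_s - 1)^{2}\big).
```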
Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser
NASA Astrophysics Data System (ADS)
Mehravar, S.; Norwood, R. A.; Peyghambarian, N.; Kieu, K.
2016-06-01
The dual-comb technique has enabled exciting applications in high resolution spectroscopy, precision distance measurements, and 3D imaging. Major advantages over traditional methods can be achieved with the dual-comb technique. For example, dual-comb spectroscopy provides orders of magnitude improvement in acquisition speed over standard Fourier-transform spectroscopy while still preserving the high resolution capability. Wider adoption of the technique has, however, been hindered by the need for complex and expensive ultrafast laser systems. Here, we present a simple and robust dual-comb system that employs a free-running bidirectionally mode-locked fiber laser operating at telecommunication wavelength. Two femtosecond frequency combs (with a small difference in repetition rates) are generated from a single laser cavity to ensure mutual coherent properties and common noise cancellation. As a result, we have achieved real-time absorption spectroscopy measurements without the need for complex servo locking, with accurate frequency referencing and relatively high signal-to-noise ratio.
Hulme, Adam; Thompson, Jason; Nielsen, Rasmus Oestergaard; Read, Gemma J M; Salmon, Paul M
2018-06-18
There have been recent calls for the application of the complex systems approach in sports injury research. However, beyond theoretical description and static models of complexity, little progress has been made towards formalising this approach in a way that is practical to sports injury scientists and clinicians. Therefore, our objective was to use a computational modelling method and develop a dynamic simulation in sports injury research. Agent-based modelling (ABM) was used to model the occurrence of sports injury in a synthetic athlete population. The ABM was developed based on sports injury causal frameworks and was applied in the context of distance running-related injury (RRI). Using the acute:chronic workload ratio (ACWR), we simulated the dynamic relationship between changes in weekly running distance and RRI through the manipulation of various 'athlete management tools'. The findings confirmed that building weekly running distances over time, even within the reported ACWR 'sweet spot', will eventually result in RRI as athletes reach and surpass their individual physical workload limits. Introducing training-related error into the simulation and the modelling of a 'hard ceiling' dynamic resulted in a higher RRI incidence proportion across the population at higher absolute workloads. The presented simulation offers a practical starting point to further apply more sophisticated computational models that can account for the complex nature of sports injury aetiology. Alongside traditional forms of scientific inquiry, the use of ABM and other simulation-based techniques could be considered as a complementary and alternative methodological approach in sports injury research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges. Two of these are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and derived values are calculated from them. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics for computers; however, in recent years, GPUs are being used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. The current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
A new supervised learning algorithm for spiking neurons.
Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming
2013-06-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, the supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running process of the neuron through adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency over the existing learning methods, so it is more powerful for solving complex and real-time problems.
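As a rough illustration of the classification view described above (not the authors' implementation; the feature traces, threshold and learning rate are placeholders), a perceptron-style update over time steps could look like this:

```python
import numpy as np

def train_spiking_perceptron(features, desired_spikes, lr=0.01, epochs=50, threshold=1.0):
    """Perceptron rule applied to spike-time classification.

    features: (T, n_syn) array of presynaptic activity traces, one row per time step.
    desired_spikes: length-T 0/1 vector marking the desired output spike times.
    """
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for x, target in zip(features, desired_spikes):
            fired = 1 if x @ w >= threshold else 0
            # Strengthen weights at missed spike times, weaken them at spurious
            # spikes, leave them unchanged when the neuron already behaves correctly.
            w += lr * (target - fired) * x
    return w
```

The weight vector converges (for separable cases) so that the weighted input crosses the firing threshold only at the desired spike times, which is exactly the two-class separation the abstract describes.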
Validating GPM-based Multi-satellite IMERG Products Over South Korea
NASA Astrophysics Data System (ADS)
Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.
2017-12-01
Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications such as water budget studies, and prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run, Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) using Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales during March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of Version-04A IMERG products in estimating precipitation over South Korea's complex topography that consists mainly of hills and mountains, as well as large coastal plains. Based on this data, the Early Run, Late Run and Final Run IMERG precipitation estimates higher than 0.1 mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1° and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html
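As a rough sketch of the threshold-conditioned bias statistic quoted above (an illustration only, not the study's processing chain; collocation of the satellite and radar grids is assumed to have been done beforehand):

```python
import numpy as np

def relative_bias(satellite, reference, threshold=0.1):
    """Percent bias of satellite estimates against reference QPE for rates above a threshold (mm/h)."""
    satellite, reference = np.asarray(satellite, float), np.asarray(reference, float)
    mask = (satellite >= threshold) & np.isfinite(satellite) & np.isfinite(reference)
    return 100.0 * (satellite[mask].sum() - reference[mask].sum()) / reference[mask].sum()
```

Applied to collocated 0.1-degree, hourly pairs, a positive value indicates the satellite product overestimates relative to the radar-gauge reference.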
Focus on South Africa: Time Running Out.
ERIC Educational Resources Information Center
Bryan, Sam, Ed.
1983-01-01
These units of study and learning activities for use in secondary social studies classes will help students better understand the complex situation in South Africa and prepare them to make wise and effective decisions about U.S. policy toward South Africa in the crucial years ahead. (RM)
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline; The Prodiguer messaging platform leverages widely used open-source message broker software called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open source and open standards.
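For illustration, publishing a monitoring message to a RabbitMQ topic exchange with the Python pika client looks roughly like the sketch below; the exchange name, routing key and payload fields are hypothetical and not the Prodiguer message schema.

```python
import json
import pika

# Connect to a local broker (host and credentials are assumptions for the example)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="simulation", exchange_type="topic", durable=True)

# A heartbeat-style monitoring event emitted by a running simulation
event = {"simulation_id": "abc123", "event": "heartbeat", "timestamp": "2014-05-01T12:00:00Z"}
channel.basic_publish(exchange="simulation",
                      routing_key="simulation.monitoring.heartbeat",
                      body=json.dumps(event))
connection.close()
```

Consumers such as a dashboard or a notifier would bind queues to routing-key patterns (for example `simulation.monitoring.#`) and react to the events as they arrive.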
Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad
2016-01-01
For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of recombinant protein, cholesterol oxidase (COD) was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum of feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading OD 600 nm of 74, and run time of 29.9 min with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate (mL/h): 258.08, bead loading (%, v/v): 80%, cell loading (OD 600 nm ): 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in continuous bead milling process has been attempted for the very first time in our study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. The quadratic second order response functions are not flexible enough to represent such complex non-linear dependence. ANN being a summation function of multiple layers are capable to represent complex non-linear dependence of variables in this case; enzyme recovery as a function of bead milling parameters. Since GA can even optimize discontinuous functions present study cites a perfect example of using machine learning (ANN) in combination with evolutionary optimization (GA) for representing undefined biological functions which is the case for common industrial processes involving biological moieties.
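The response-surface step in RSM amounts to fitting a full second-order polynomial in the factors by least squares; a generic Python sketch of that step is below (an illustration, not the authors' code, and any factor settings fed to it are placeholders).

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Second-order response-surface terms: intercept, linear, and quadratic/interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Ordinary least-squares fit of the quadratic surface; returns a predictor for new settings."""
    coeffs, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return lambda x: float(quadratic_design_matrix(np.atleast_2d(np.asarray(x, float))) @ coeffs)
```

The fitted surface is then searched (analytically, by grid search, or by a genetic algorithm) for the factor combination that maximizes predicted recovery, which is where the ANN-GA alternative replaces the quadratic surface with a more flexible network model.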
Improved Algorithms Speed It Up for Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazi, A
2005-09-20
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.
Algorithmic Complexity. Volume II.
1982-06-01
digital computers, this improvement will go unnoticed if only a few complex products are to be taken; however, it can become increasingly important as...computed in the reverse order. If the products are formed moving from the top of the tree downward, and then the divisions are performed going from the...the reverse order, going up the tree. (r = a mod m means that r is the remainder when a is divided by m.) The overall running time of the algorithm is
An empirically derived figure of merit for the quality of overall task performance
NASA Technical Reports Server (NTRS)
Lemay, Moira
1989-01-01
The need to develop an operationally relevant figure of merit for the quality of performance of a complex system such as an aircraft cockpit stems from a hypothesized dissociation between measures of performance and those of workload. Performance can be measured in terms of time, errors, or a combination of these. In most tasks performed by expert operators, errors are relatively rare and often corrected in time to avoid consequences. Moreover, perfect performance is seldom necessary to accomplish a particular task. Moreover, how well an expert performs a complex task consisting of a series of discrete cognitive tasks superimposed on a continuous task, such as flying an aircraft, does not depend on how well each discrete task is performed, but on their smooth sequencing. This makes the amount of time spent on each subtask of paramount importance in measuring overall performance, since smooth sequencing requires a minimum amount of time spent on each task. Quality consists in getting tasks done within a crucial time interval while maintaining acceptable continuous task performance. Thus, a figure of merit for overall quality of performance should be primarily a measure of time to perform discrete subtasks combined with a measure of basic vehicle control. Thus, the proposed figure of merit requires doing a task analysis on a series of performance, or runs, of a particular task, listing each discrete task and its associated time, and calculating the mean and standard deviation of these times, along with the mean and standard deviation of tracking error for the whole task. A set of simulator data on 30 runs of a landing task was obtained and a figure of merit will be calculated for each run. The figure of merit will be compared for voice and data link, so that the impact of this technology on total crew performance (not just communication performance) can be assessed. The effect of data link communication on other cockpit tasks will also be considered.
NASA Astrophysics Data System (ADS)
Magaldi, Marcello G.; Haine, Thomas W. N.
2015-02-01
The cascade of dense waters of the Southeast Greenland shelf during summer 2003 is investigated with two very high-resolution (0.5-km) simulations. The first simulation is non-hydrostatic. The second simulation is hydrostatic and about 3.75 times less expensive. Both simulations are compared to a 2-km hydrostatic run, about 31 times less expensive than the 0.5 km non-hydrostatic case. Time-averaged volume transport values for deep waters are insensitive to the changes in horizontal resolution and vertical momentum dynamics. By this metric, both lateral stirring and vertical shear instabilities associated with the cascading process are accurately parameterized by the turbulent schemes used at 2-km horizontal resolution. All runs compare well with observations and confirm that the cascade is mainly driven by cyclones which are linked to dense overflow boluses at depth. The passage of the cyclones is also associated with the generation of internal gravity waves (IGWs) near the shelf. Surface fields and kinetic energy spectra do not differ significantly between the runs for horizontal scales L > 30 km. Complex structures emerge and the spectra flatten at scales L < 30 km in the 0.5-km runs. In the non-hydrostatic case, additional energy is found in the vertical kinetic energy spectra at depth in the 2 km < L < 10 km range and with frequencies around 7 times the inertial frequency. This enhancement is missing in both hydrostatic runs and is here argued to be due to the different IGW evolution and propagation offshore. The different IGW behavior in the non-hydrostatic case has strong implications for the energetics: compared to the 2-km case, the baroclinic conversion term and vertical kinetic energy are about 1.4 and at least 34 times larger, respectively. This indicates that the energy transfer from the geostrophic eddy field to IGWs and their propagation away from the continental slope is not properly represented in the hydrostatic runs.
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
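As an illustration of the kind of benchmark described here, the NumPy sketch below times repeated reconstruction-matrix by slope-vector products and reports mean latency and jitter. The matrix dimensions are placeholders rather than any particular instrument's slope and actuator counts, and the actual benchmark ran recompiled C/C++ on the Xeon Phi rather than Python.

```python
import time
import numpy as np

def benchmark_mvm(n_slopes=9216, n_actuators=5000, repeats=200):
    """Time repeated MVM wavefront reconstructions; sizes are illustrative assumptions."""
    R = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # reconstruction matrix
    s = np.random.rand(n_slopes).astype(np.float32)               # slope measurement vector
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = R @ s
        samples.append(time.perf_counter() - t0)
    samples = np.array(samples)
    print(f"mean {samples.mean()*1e3:.2f} ms, jitter (std) {samples.std()*1e3:.3f} ms, "
          f"implied loop rate {1.0/samples.mean():.0f} Hz")

benchmark_mvm()
```

Execution-time variability (the jitter reported above) matters as much as the mean for an AO real-time controller, since missed frames degrade closed-loop performance.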
Compact, high-speed algorithm for laying out printed circuit board runs
NASA Astrophysics Data System (ADS)
Zapolotskiy, D. Y.
1985-09-01
A high speed printed circuit connection layout algorithm is described which was developed within the framework of an interactive system for designing two-sided printed circuit boards. For this reason, algorithm speed was considered, a priori, as a requirement equally as important as the inherent demand for minimizing circuit run lengths and the number of junction openings. This resulted from the fact that, in order to provide psychological man/machine compatibility in the design process, real-time dialog during the layout phase is possible only within limited time frames (on the order of several seconds) for each circuit run. The work was carried out for use on an ARM-R automated work site complex based on an SM-4 minicomputer with a 32K-word memory. This limited memory capacity heightened the demand for algorithm speed and also tightened data file structure and size requirements. The layout algorithm's design logic is analyzed. The structure and organization of the data files are described.
DiSilvestro, Robert A; Hart, Staci; Marshall, Trisha; Joseph, Elizabeth; Reau, Alyssa; Swain, Carmen B; Diehl, Jason
2017-01-01
Certain essential and conditionally essential nutrients (CENs) perform functions involved in aerobic exercise performance. However, increased intake of such nutrient combinations has not actually been shown to improve such performance. For 1 mo, aerobically fit, young adult women took either a combination of 3 mineral glycinate complexes (daily dose: 36 mg iron, 15 mg zinc, and 2 mg copper) + 2 CENs (daily dose: 2 g carnitine and 400 mg phosphatidylserine), or the same combination with generic mineral complexes, or placebo (n = 14/group). In Trial 1, before and after 1 mo, subjects were tested for 3 mile run time (primary outcome), followed by distance covered in 25 min on a stationary bike (secondary outcome), followed by a 90 s step test (secondary outcome). To test reproducibility of the run results, and to examine a lower dose of carnitine, a second trial was done. New subjects took either mineral glycinates + CENs (1 g carnitine) or placebo (n = 17/group); subjects were tested for pre- and post-treatment 3 mile run time (primary outcome). In Trial 1, the mineral glycinates + CENs decreased 3 mile run time (25.6 ± 2.4 vs 26.5 ± 2.3 min, p < 0.05, paired t-test), increased stationary bike distance after 25 min (6.5 ± 0.6 vs 6.0 ± 0.8 miles, p < 0.05, paired t-test), and increased steps in the step test (43.8 ± 4.8 vs 40.3 ± 6.4 steps, p < 0.05, paired t-test). The placebo significantly affected only the biking distance, but it was less than for the glycinates-CENs treatment (0.2 ± 0.4 vs 0.5 ± 0.1 miles, p < 0.05, ANOVA + Tukey). The generic minerals + CENs only significantly affected the step test (44.1 ± 5.2 vs 41.0 ± 5.9 steps, p < 0.05, paired t-test). In Trial 2, 3 mile run time was decreased for the mineral glycinates + CENs (23.9 ± 3.1 vs 24.7 ± 2.5, p < 0.005, paired t-test), but not by the placebo. All changes for Test Formula II or III were high compared to placebo (1.9 to 4.9, Cohen's D), and high for Test Formula II vs I for running and biking (3.2 & 3.5, Cohen's D). In summary, a combination of certain mineral complexes plus two CENs improved aerobic exercise performance in fit young adult women.
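For readers who want to reproduce this style of analysis, a minimal sketch of the paired t-test and a difference-score Cohen's d (one common variant; the original analysis may have used a different effect-size formula, and the numbers below are hypothetical) is:

```python
import numpy as np
from scipy import stats

def paired_effect(pre, post):
    """Paired t-test plus Cohen's d computed from the pre/post difference scores."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(pre, post)
    diff = pre - post
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d

# Hypothetical 3-mile run times (minutes) before and after one month of supplementation
pre_times  = [26.1, 27.0, 25.8, 26.7, 26.3]
post_times = [25.3, 26.2, 25.1, 25.8, 25.6]
print(paired_effect(pre_times, post_times))
```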
SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics
Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf
2015-01-01
Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics, have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465
NASA Astrophysics Data System (ADS)
Budiman, M. A.; Rachmawati, D.; Jessica
2018-03-01
This study aims to combine the Trithemius algorithm and the double transposition cipher in file security, to be implemented as an Android-based application. The parameters being examined are the real running time and the complexity value. The type of file to be used is a file in PDF format. The overall result shows that the complexity of the two algorithms with the super encryption method is reported as Θ(n^2). However, the encryption process using the Trithemius algorithm is much faster than the one using the Double Transposition Cipher, and the processing time grows linearly with the length of the plaintext and the password.
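A minimal sketch of the two classical ciphers being combined, a progressive (Trithemius-style) shift followed by two columnar transposition passes, is shown below; this illustrates the general technique, not the authors' Android implementation, and the byte-wise shift is an assumption made so arbitrary file content can be handled.

```python
def trithemius_encrypt(data: bytes) -> bytes:
    """Progressive shift: the i-th byte is shifted by i positions (byte-wise Trithemius variant)."""
    return bytes((b + i) % 256 for i, b in enumerate(data))

def columnar_transposition(data: bytes, key: str) -> bytes:
    """Write data row-wise into len(key) columns, then read the columns in sorted key order."""
    n = len(key)
    cols = [data[i::n] for i in range(n)]
    order = sorted(range(n), key=lambda k: key[k])
    return b"".join(cols[k] for k in order)

def super_encrypt(data: bytes, key1: str, key2: str) -> bytes:
    """Super encryption: substitution pass followed by a double (two-key) transposition."""
    return columnar_transposition(columnar_transposition(trithemius_encrypt(data), key1), key2)

ciphertext = super_encrypt(b"example PDF bytes", "SECRET", "ANDROID")
```

Decryption simply reverses the passes in the opposite order with the inverse operations.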
Aggregated Indexing of Biomedical Time Series Data
Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.
2016-01-01
Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. This algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitudes faster when run on aggregated indexes. PMID:27617298
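The grouping step can be pictured with a random-hyperplane LSH sketch like the following (an illustrative toy, not the paper's algorithm, which additionally handles normalisation, online updates and index maintenance):

```python
import numpy as np
from collections import defaultdict

def lsh_aggregate(segments, n_planes=12, seed=0):
    """Group fixed-length time-series segments by their random-hyperplane LSH signature."""
    rng = np.random.default_rng(seed)
    segments = np.asarray(segments, float)
    planes = rng.standard_normal((n_planes, segments.shape[1]))
    buckets = defaultdict(list)
    for idx, seg in enumerate(segments):
        seg = seg - seg.mean()                         # crude offset normalisation
        signature = tuple((planes @ seg > 0).astype(int))
        buckets[signature].append(idx)                 # similar segments tend to share a signature
    return buckets
```

Each bucket then contributes a single aggregate entry to the index instead of one entry per raw segment, which is how the index can grow far more slowly than the raw number of stored objects.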
Hardware-in-the-Loop Power Extraction Using Different Real-Time Platforms (PREPRINT)
2008-07-01
engine controller (FADEC). Incorporating various transient subsystem level models into a complex modeling tool can be a challenging process when each...used can also be modified or replaced as appropriate. In its current configuration, the generic turbine engine model's FADEC runs primarily on a...simulation in real-time, two platforms were tested: dSPACE and National Instruments' (NI) LabVIEW Real-Time. For both dSPACE and NI, the engine and FADEC
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as those involving high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, then the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, the monitoring of the behavior of a simulation, while it is running (or live), is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
A case for Sandia investment in complex adaptive systems science and technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colbaugh, Richard; Tsao, Jeffrey Yeenien; Johnson, Curtis Martin
2012-05-01
This white paper makes a case for Sandia National Laboratories investments in complex adaptive systems science and technology (S&T) -- investments that could enable higher-value-added and more-robustly-engineered solutions to challenges of importance to Sandia's national security mission and to the nation. Complex adaptive systems are ubiquitous in Sandia's national security mission areas. We often ignore the adaptive complexity of these systems by narrowing our 'aperture of concern' to systems or subsystems with a limited range of function exposed to a limited range of environments over limited periods of time. But by widening our aperture of concern we could increase our impact considerably. To do so, the science and technology of complex adaptive systems must mature considerably. Despite an explosion of interest outside of Sandia, however, that science and technology is still in its youth. What has been missing is contact with real (rather than model) systems and real domain-area detail. With its center-of-gravity as an engineering laboratory, Sandia has made considerable progress applying existing science and technology to real complex adaptive systems. It has focused much less, however, on advancing the science and technology itself. But its close contact with real systems and real domain-area detail represents a powerful strength with which to help complex adaptive systems science and technology mature. Sandia is thus both a prime beneficiary of, as well as potentially a prime contributor to, complex adaptive systems science and technology. Building a productive program in complex adaptive systems science and technology at Sandia will not be trivial, but a credible path can be envisioned: in the short run, continue to apply existing science and technology to real domain-area complex adaptive systems; in the medium run, jump-start the creation of new science and technology capability through Sandia's Laboratory Directed Research and Development program; and in the long run, inculcate an awareness at the Department of Energy of the importance of supporting complex adaptive systems science through its Office of Science.
Incorporating Flexibility in the Design of Repairable Systems - Design of Microgrids
2014-01-01
Optimization of complex systems such as a microgrid is, however, computationally intensive. The problem is exacerbated if we must incorporate flexibility in terms of allowing the microgrid architecture and its running protocol to change with time. To reduce the computational effort, this paper
The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency
Kiely, John
2017-01-01
In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, however, notwithstanding this evolutionary-bestowed proficiency, running-related injuries, and Overuse syndromes in particular, are widely pervasive. The term ‘coordination’ is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of ‘running coordination,’ and the nature of their integration, remain poorly elaborated. Conventionally running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexities. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experiences persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies, and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse, misuse, and disuse—this spectrum of available ‘choice’ dysfunctionally contracts, and our capacity to safely disperse the mechanical ‘stress’ of running progressively diminishes. Now the running work burden falls increasingly on reduced populations of collaborating components. Accordingly our capacity to effectively manage, dissipate and accommodate running-imposed stress diminishes, and vulnerability to Overuse syndromes escalates. Awareness of the deep underpinnings of running coordination enhances conceptual clarity, thereby informing training and rehabilitation insights designed to offset the legacy of excessive or progressively accumulating exposure to running-imposed mechanical stress. PMID:28659838
DNA strand displacement system running logic programs.
Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr
2014-01-01
The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded comprising different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps. The potential of the model is demonstrated also by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
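The logical operation that the strand-displacement cascade implements is ordinary propositional resolution; the Python toy below shows a single resolution step on clauses represented as sets of literals (an in-silico illustration of the logic only, not of the DNA encoding).

```python
def resolve(clause1, clause2):
    """Return the resolvent of two clauses, or None if they share no complementary literal."""
    for lit in clause1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in clause2:
            return (set(clause1) - {lit}) | (set(clause2) - {neg})
    return None

# Horn-clause example: resolving (~p OR q) with the fact (p) derives (q)
print(resolve({"~p", "q"}, {"p"}))   # -> {'q'}
```

In the molecular model, each such step corresponds to one strand-displacement reaction between the molecules encoding the two clauses, and a program is executed by letting these steps cascade.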
Novel Framework for Reduced Order Modeling of Aero-engine Components
NASA Astrophysics Data System (ADS)
Safi, Ali
The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework in terms of modeling and meshing of any complex structure, in this case an aero-engine casing. In this study the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of Static Condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model. Although once the components are reduced, subsequent assembly runs are significantly faster. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously considering the number of iterations that may be required during the design cycle.
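For reference, the Craig-Bampton reduction named above condenses a component's stiffness and mass matrices to its boundary degrees of freedom plus a small set of fixed-interface modes. A generic NumPy/SciPy sketch, assuming symmetric K and M and a user-chosen list of boundary DOF indices, is shown below; it is illustrative and not tied to any particular FE package.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M) to boundary DOFs plus n_modes fixed-interface normal modes."""
    boundary = np.asarray(boundary)
    interior = np.setdiff1d(np.arange(K.shape[0]), boundary)
    Kib = K[np.ix_(interior, boundary)]
    Kii = K[np.ix_(interior, interior)]
    Mii = M[np.ix_(interior, interior)]
    # Static (constraint) modes: interior response to unit boundary displacements
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition, lowest n_modes kept
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]
    nb, ni = len(boundary), len(interior)
    T = np.zeros((nb + ni, nb + n_modes))
    T[:nb, :nb] = np.eye(nb)
    T[nb:, :nb] = Psi
    T[nb:, nb:] = Phi
    # Reorder the full matrices to [boundary, interior] before projecting
    order = np.concatenate([boundary, interior])
    Ko, Mo = K[np.ix_(order, order)], M[np.ix_(order, order)]
    return T.T @ Ko @ T, T.T @ Mo @ T
```

The reduced matrices retain the component's low-frequency dynamics at the interface, which is why they can be reused unchanged across assembly-level iterations; the up-front cost of forming them is the trade-off discussed above.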
Graph-Based Semantic Web Service Composition for Healthcare Data Integration.
Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
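A minimal sketch of the run-time idea, under the assumption that each service is described by sets of input and output parameters, is given below; the greedy forward chaining is illustrative only and is not the paper's graph-based search algorithm, and the service names are hypothetical.

```python
def compose(services, available, goal):
    """Chain services whose inputs are satisfied until the goal parameters are produced."""
    known, plan = set(available), []
    progress = True
    while progress and not goal <= known:
        progress = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
    return plan if goal <= known else None

# Toy healthcare-style example (service names and parameters are hypothetical)
services = {
    "patient_lookup": ({"patient_id"}, {"demographics"}),
    "lab_fetch":      ({"patient_id"}, {"lab_results"}),
    "risk_score":     ({"demographics", "lab_results"}, {"risk_report"}),
}
print(compose(services, {"patient_id"}, {"risk_report"}))
```

In the paper's architecture the dependency graph that drives this kind of search is prepared once at management time from the semantic matchmaking rules, so the run-time subsystem only has to traverse it.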
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes which were necessary for our algorithms to achieve good performance of our multithreaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
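For readers unfamiliar with POD, the NumPy sketch below shows the basic snapshot-SVD and Galerkin-projection step for a linear system; it is a generic illustration and deliberately omits the DEIM interpolation and the non-linear and boundary-condition issues discussed above.

```python
import numpy as np

def pod_solve(snapshots, A, b, energy=0.999):
    """Project A x = b onto a POD basis built from full-model snapshot states.

    snapshots: (n_dof, n_snap) matrix whose columns are full-model states at selected time steps.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # modes needed for the requested energy
    Phi = U[:, :r]                                     # reduced basis, n_dof x r with r << n_dof
    A_r = Phi.T @ A @ Phi                              # Galerkin projection of the operator
    b_r = Phi.T @ b
    x_r = np.linalg.solve(A_r, b_r)                    # solve in the r-dimensional subspace
    return Phi @ x_r                                   # lift the solution back to full space
```

The speed-up comes from solving r-by-r systems at every time step instead of n_dof-by-n_dof ones; the difficulty described above is that non-linear terms and changing boundary conditions force parts of the full-dimensional system to be rebuilt anyway, which POD-DEIM and the proposed second projection try to avoid.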
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
On the Complexity of Delaying an Adversary’s Project
2005-01-01
interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard. We
Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.
Scholze, Sebastian; Barata, Jose; Stokic, Dragan
2017-02-24
Highly flexible manufacturing systems require continuous run-time (self-) optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption etc. A promising approach for achieving (self-) optimization in manufacturing systems is the usage of the context sensitivity approach based on data streaming from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information to achieve context sensitivity. Cyber-physical systems can be seen as complex intelligent sensors providing data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-) optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated in manufacturing systems/processes. A generic approach for context sensitivity, based on self-learning algorithms, is proposed, aiming at various manufacturing systems. The new solution encompasses a run-time context extractor and an optimizer. Based on the self-learning module, both the context extractor and the optimizer continuously learn and improve their performance. The solution follows Service Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.
Economics of stocker production.
Peel, Derrell S
2006-07-01
The beef cattle industry, like any industry, is subject to economic signals to increase or decrease production according to short-run and long-run market conditions. Profitable stocker production is the result of careful matching of economic conditions to alternative animal production systems combined with sound animal and business management. The economics of stocker production are driven by the feeder cattle price-weight relation that combines broad market signals about how much production is needed with complex and subtle signals about how that production should be accomplished. The result is a dynamic set of values of gain that direct producers to adjust the level, type, and timing of stocker production according to changing market conditions.
Refocusing from a plenoptic camera within seconds on a mobile phone
NASA Astrophysics Data System (ADS)
Gómez-Cárdenes, Óscar; Marichal-Hernández, José G.; Rosa, Fernando L.; Lüke, Jonas P.; Fernández-Valdivia, Juan José; Rodríguez-Ramos, José M.
2014-05-01
Refocusing a plenoptic image by digital means and after the exposure has been thoroughly studied in the last years, but few efforts have been made in the direction of real time implementation in a constrained environment such as that provided by current mobile phones and tablets. In this work we address the aforementioned challenge demonstrating that a complete focal stack, comprising 31 refocused planes from a (256×16)^2 plenoptic image, can be achieved within seconds by a current SoC mobile phone platform. The selection of an appropriate algorithm is the key to success. In a previous work we developed an algorithm, the fast approximate 4D:3D discrete Radon transform, that performs this task with linear time complexity where others obtain quadratic or linearithmic time complexity. Moreover, that algorithm does not require complex number transforms, trigonometric calculus, or even multiplications or floating-point numbers. Our algorithm has been ported to a multi-core ARM chip on an off-the-shelf tablet running Android. A careful implementation exploiting parallelism at several levels has been necessary. The final implementation takes advantage of multi-threading in native code and NEON SIMD instructions. As a result our current implementation completes the refocusing task within seconds for a 16 megapixel image, much faster than previous attempts running on powerful PC platforms or dedicated hardware. The times consumed by the different stages of the digital refocusing are given and the strategies to achieve this result are discussed. Time results are given for a variety of environments within the Android ecosystem, from the weaker/cheaper SoCs to the top of the line for 2013.
Effects of human running cadence and experimental validation of the bouncing ball model
NASA Astrophysics Data System (ADS)
Bencsik, László; Zelei, Ambrus
2017-05-01
The biomechanical analysis of human running is a complex problem, because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on energy efficiency of running and ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
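A minimal numerical sketch of the bouncing-ball reasoning, assuming a purely ballistic aerial phase and an assumed duty factor, is given below; the parameter values are illustrative and this is not the authors' exact model or data.

```python
G = 9.81  # gravitational acceleration, m/s^2

def bouncing_ball_step(cadence_hz, duty_factor=0.35, mass_kg=70.0):
    """Per-step flight time, touchdown speed and vertical work for a ballistic aerial phase."""
    step_period = 1.0 / cadence_hz
    flight_time = (1.0 - duty_factor) * step_period
    touchdown_speed = G * flight_time / 2.0        # vertical speed at landing
    rise = G * flight_time**2 / 8.0                # vertical excursion of the centre of mass
    vertical_work = mass_kg * G * rise             # work to relaunch the body each step
    return touchdown_speed, vertical_work

for cadence in (2.5, 3.0, 3.5):                    # roughly 150, 180 and 210 steps/min
    v, w = bouncing_ball_step(cadence)
    print(f"{cadence*60:.0f} steps/min: touchdown speed {v:.2f} m/s, vertical work {w:.1f} J/step")
```

Shorter flight phases at higher cadence give lower touchdown speeds and less vertical work per step, which is the qualitative trend the model and the experimental data point to.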
Body-terrain interaction affects large bump traversal of insects and legged robots.
Gart, Sean W; Li, Chen
2018-02-02
Small animals and robots must often rapidly traverse large bump-like obstacles when moving through complex 3D terrains, during which, in addition to leg-ground contact, their body inevitably comes into physical contact with the obstacles. However, we know little about the performance limits of large bump traversal and how body-terrain interaction affects traversal. To address these, we challenged the discoid cockroach and an open-loop six-legged robot to dynamically run into a large bump of varying height to discover the maximal traversal performance, and studied how locomotor modes and traversal performance are affected by body-terrain interaction. Remarkably, during rapid running, both the animal and the robot were capable of dynamically traversing a bump much higher than its hip height (up to 4 times the hip height for the animal and 3 times for the robot, respectively) at traversal speeds typical of running, with decreasing traversal probability with increasing bump height. A stability analysis using a novel locomotion energy landscape model explained why traversal was more likely when the animal or robot approached the bump with a low initial body yaw and a high initial body pitch, and why deflection was more likely otherwise. Inspired by these principles, we demonstrated a novel control strategy of active body pitching that increased the robot's maximal traversable bump height by 75%. Our study is a major step in establishing the framework of locomotion energy landscapes to understand locomotion in complex 3D terrains.
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to complexity of the controlled objects, the selection of control strategies and algorithms in vehicle control system designs is an important task. Moreover, the control problem of automobile active suspensions has been become one of the important relevant investigations due to the constrained peculiarity and parameter uncertainty of mathematical models. In this study, after establishing the non-stationary road surface excitation model, a study on the active suspension control for non-stationary running condition was conducted using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter car model with parameter uncertainty was derived. The H∞ state feedback control strategy with time-domain hard constraints was proposed, and then was used to design the active suspension control system of the quarter car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the proposed controller stability. Simulation results show that the proposed control strategy has high systemic stability on the condition of non-stationary running and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy can achieve a promising improvement on ride comfort and satisfy the requirements of dynamic suspension deflection, dynamic tire loads and required control forces within given constraints, as well as non-stationary running condition.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Optimizing Mars Airplane Trajectory with the Application Navigation System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Riley, Derek
2004-01-01
Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces time in optimization of the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. Then, ANS uses a scheduler to dynamically assign ready tasks to machine resources and the GridScape for tracking available resources and forecasting completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
Matias, Alessandra B; Taddei, Ulisses T; Duarte, Marcos; Sacco, Isabel C N
2016-04-14
Overall performance, particularly in a very popular sports activity such as running, is typically influenced by the status of the musculoskeletal system and the level of training and conditioning of the biological structures. Any change in the musculoskeletal system's biomechanics, especially in the feet and ankles, will strongly influence the biomechanics of runners, possibly predisposing them to injuries. A thorough understanding of the effects of a therapeutic approach focused on feet biomechanics, on strength and functionality of lower limb muscles will contribute to the adoption of more effective therapeutic and preventive strategies for runners. A randomized, prospective controlled and parallel trial with blind assessment is designed to study the effects of a "ground-up" therapeutic approach focused on the foot-ankle complex as it relates to the incidence of running-related injuries in the lower limbs. One hundred and eleven (111) healthy long-distance runners will be randomly assigned to either a control (CG) or intervention (IG) group. IG runners will participate in a therapeutic exercise protocol for the foot-ankle for 8 weeks, with 1 directly supervised session and 3 remotely supervised sessions per week. After the 8-week period, IG runners will keep exercising for the remaining 10 months of the study, supervised only by web-enabled software three times a week. At baseline, 2 months, 4 months and 12 months, all runners will be assessed for running-related injuries (primary outcome), time for the occurrence of the first injury, foot health and functionality, muscle trophism, intrinsic foot muscle strength, dynamic foot arch strain and lower-limb biomechanics during walking and running (secondary outcomes). This is the first randomized clinical trial protocol to assess the effect of an exercise protocol that was designed specifically for the foot-and-ankle complex on running-related injuries to the lower limbs of long-distance runners. We intend to show that the proposed protocol is an innovative and effective approach to decreasing the incidence of injuries. We also expect a lengthening in the time of occurrence of the first injury, an improvement in foot function, an increase in foot muscle mass and strength and beneficial biomechanical changes while running and walking after a year of exercising. Clinicaltrials.gov Identifier NCT02306148 (November 28, 2014) under the name "Effects of Foot Strengthening on the Prevalence of Injuries in Long Distance Runners". Committee of Ethics in Research of the School of Medicine of the University of Sao Paulo (18/03/2015, Protocol # 031/15).
Simple estimation of linear 1+1 D tsunami run-up
NASA Astrophysics Data System (ADS)
Fuentes, M.; Campos, J. A.; Riquelme, S.
2016-12-01
An analytical expression is derived concerning the linear run-up for any given initial wave generated over a sloping bathymetry. Due to the simplicity of the linear formulation, complex transformations are unnecessary, because the shoreline motion is directly obtained in terms of the initial wave. This analytical result not only supports maximum run-up invariance between linear and non-linear theories, but also gives the time evolution of shoreline motion and velocity. The results exhibit good agreement with the non-linear theory. The present formulation also allows computing the shoreline motion numerically from a customised initial waveform, including non-smooth functions. This is useful for numerical tests, laboratory experiments or realistic cases in which the initial disturbance might be retrieved from seismic data rather than using a theoretical model. It is also shown that the real case studied is consistent with the field observations.
NASA Technical Reports Server (NTRS)
Tselioudis, George; Douvis, Costas; Zerefos, Christos
2012-01-01
Current climate and future climate-warming runs with the RegCM Regional Climate Model (RCM) at 50 and 11 km-resolutions forced by the ECHAM GCM are used to examine whether the increased resolution of the RCM introduces novel information in the precipitation field when the models are run for the mountainous region of the Hellenic peninsula. The model results are inter-compared with the resolution of the RCM output degraded to match that of the GCM, and it is found that in both the present and future climate runs the regional models produce more precipitation than the forcing GCM. At the same time, the RCM runs produce increases in precipitation with climate warming even though they are forced with a GCM that shows no precipitation change in the region. The additional precipitation is mostly concentrated over the mountain ranges, where orographic precipitation formation is expected to be a dominant mechanism. It is found that, when examined at the same resolution, the elevation heights of the GCM are lower than those of the averaged RCM in the areas of the main mountain ranges. It is also found that the majority of the difference in precipitation between the RCM and the GCM can be explained by their difference in topographic height. The study results indicate that, in complex topography regions, GCM predictions of precipitation change with climate warming may be dry biased due to the GCM smoothing of the regional topography.
Quantum trajectories for time-dependent adiabatic master equations
NASA Astrophysics Data System (ADS)
Yip, Ka Wa; Albash, Tameem; Lidar, Daniel A.
2018-02-01
We describe a quantum trajectories technique for the unraveling of the quantum adiabatic master equation in Lindblad form. By evolving a complex state vector of dimension N instead of a complex density matrix of dimension N², simulations of larger system sizes become feasible. The cost of running many trajectories, which is required to recover the master equation evolution, can be minimized by running the trajectories in parallel, making this method suitable for high performance computing clusters. In general, the trajectories method can provide up to a factor N advantage over directly solving the master equation. In special cases where only the expectation values of certain observables are desired, an advantage of up to a factor N² is possible. We test the method by demonstrating agreement with direct solution of the quantum adiabatic master equation for 8-qubit quantum annealing examples. We also apply the quantum trajectories method to a 16-qubit example originally introduced to demonstrate the role of tunneling in quantum annealing, which is significantly more time consuming to solve directly using the master equation. The quantum trajectories method provides insight into individual quantum jump trajectories and their statistics, thus shedding light on open system quantum adiabatic evolution beyond the master equation.
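As a concrete reference for the unraveling idea, the following is a minimal first-order quantum-jump (Monte Carlo wave function) step for a time-independent Lindblad equation with a single jump operator. It is a generic sketch, not the authors' time-dependent adiabatic implementation, and the example Hamiltonian and decay rate are arbitrary.

```python
import numpy as np

def trajectory_step(psi, H, L, dt, rng):
    """One first-order quantum-jump (MCWF) step for a single Lindblad operator L.

    Generic unraveling sketch, not the adiabatic master-equation code of the paper.
    """
    Ld_L = L.conj().T @ L
    p_jump = dt * np.real(np.vdot(psi, Ld_L @ psi))    # jump probability in this step
    if rng.random() < p_jump:
        psi = L @ psi                                  # apply the jump ...
    else:
        H_eff = H - 0.5j * Ld_L                        # ... or evolve with the effective Hamiltonian
        psi = psi - 1j * dt * (H_eff @ psi)
    return psi / np.linalg.norm(psi)                   # renormalize either way

# Example: a driven, decaying qubit. Averaging many trajectories recovers the
# master-equation expectation values.
rng = np.random.default_rng(0)
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)                     # sigma_x drive
L = np.sqrt(0.1) * np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)      # decay operator
psi = np.array([0.0, 1.0], dtype=complex)                                 # start excited
for _ in range(2000):
    psi = trajectory_step(psi, H, L, 1e-3, rng)
print("excited-state population along this trajectory:", abs(psi[1]) ** 2)
```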
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John
Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method. It also solves a scalar transport equation for temperature, using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004) including both continuous and instantaneous releases. Newly implemented Aeolus capabilities include a decay chain model and an explosive Radiological Dispersal Device (RDD) source term; these capabilities are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach.
Zakov, Shay; Tsur, Dekel; Ziv-Ukelson, Michal
2011-08-18
RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms.
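For context, the cubic-time baseline the abstract refers to is the Nussinov-style dynamic program sketched below, which maximizes the number of nested base pairs. The sub-cubic Valiant-style reformulation developed in the paper is not reproduced here; the minimum loop length and the allowed pairs are illustrative choices.

```python
def nussinov_max_pairs(seq, min_loop=3):
    """Classic Nussinov-style dynamic program: maximum number of nested base pairs.

    This is the cubic-time baseline referred to in the abstract, not the sub-cubic
    Valiant-style algorithm developed in the paper.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):              # O(n^2) cells ...
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i unpaired
            best = max(best, dp[i][j - 1])           # j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                # ... each needing O(n) bifurcation work
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # small example
```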
Reducing the worst case running times of a family of RNA and CFG problems, using Valiant's approach
2011-01-01
Background RNA secondary structure prediction is a mainstream bioinformatic domain, and is key to computational analysis of functional RNA. In more than 30 years, much research has been devoted to defining different variants of RNA structure prediction problems, and to developing techniques for improving prediction quality. Nevertheless, most of the algorithms in this field follow a similar dynamic programming approach as that presented by Nussinov and Jacobson in the late 70's, which typically yields cubic worst case running time algorithms. Recently, some algorithmic approaches were applied to improve the complexity of these algorithms, motivated by new discoveries in the RNA domain and by the need to efficiently analyze the increasing amount of accumulated genome-wide data. Results We study Valiant's classical algorithm for Context Free Grammar recognition in sub-cubic time, and extract features that are common to problems on which Valiant's approach can be applied. Based on this, we describe several problem templates, and formulate generic algorithms that use Valiant's technique and can be applied to all problems which abide by these templates, including many problems within the world of RNA Secondary Structures and Context Free Grammars. Conclusions The algorithms presented in this paper improve the theoretical asymptotic worst case running time bounds for a large family of important problems. It is also possible that the suggested techniques could be applied to yield a practical speedup for these problems. For some of the problems (such as computing the RNA partition function and base-pair binding probabilities), the presented techniques are the only ones which are currently known for reducing the asymptotic running time bounds of the standard algorithms. PMID:21851589
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
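The central quantity in the abstract, the probability that a new mutant takes over a resident population, can be estimated by brute force for the simplest well-mixed case, as in the hedged sketch below. The paper's results concern spatially structured populations and exact complexity bounds, neither of which this toy simulation addresses; population size, fitness and trial counts are illustrative.

```python
import numpy as np

def moran_fixation_probability(N, r, trials=20000, rng=None):
    """Monte Carlo estimate of the probability that one mutant of relative fitness r
    fixes in a well-mixed Moran process of size N (not the spatial models of the paper)."""
    rng = rng or np.random.default_rng(0)
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < N:
            p_mut = mutants * r / (mutants * r + (N - mutants))   # mutant reproduces
            birth_is_mutant = rng.random() < p_mut
            death_is_mutant = rng.random() < mutants / N           # uniform death
            mutants += birth_is_mutant - death_is_mutant
        fixed += mutants == N
    return fixed / trials

N, r = 20, 1.5
est = moran_fixation_probability(N, r)
exact = (1 - 1 / r) / (1 - 1 / r ** N)     # known closed form for the well-mixed case
print(f"simulated {est:.3f} vs analytical {exact:.3f}")
```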
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.
Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile
2015-01-01
In recent years we have witnessed a growth in sequencing yield, the number of samples sequenced, and, as a result, a growth of publicly maintained sequence databases. This increase in data has placed high demands on protein similarity search algorithms, which face two opposing goals: how to keep the running times acceptable while maintaining a high-enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, alignments of a query to the whole database are usually too slow. Therefore, the majority of the protein similarity search methods prior to doing the exact local alignment apply heuristics to reduce the number of possible candidate sequences in the database. However, there is still a need for the alignment of a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times, as a standalone tool, are comparable to the running times of BLAST, it is primarily intended to be used for the exact local alignment phase in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++ and more than 20 times faster than SSW at the time of writing, using multiple queries on Swiss-prot and Uniref90 databases.
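The quadratic step the abstract refers to is the Smith-Waterman dynamic program; a minimal score-only version with a linear gap penalty is sketched below for reference. It is not the GPU/CPU-parallel SW#db implementation, and the scoring parameters are illustrative.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal Smith-Waterman local alignment score (linear gap penalty).

    Illustrates the O(len(a) * len(b)) dynamic program whose cost motivates the
    parallelization in SW#db; this sketch computes the best local score only.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # (mis)match
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```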
Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly proportional to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control- and voltage-errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
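The temporal decomposition idea can be illustrated generically: split the simulated period into contiguous windows and solve them in parallel, accepting initialization errors at the window boundaries. The sketch below uses a placeholder per-window solver rather than OpenDSS, so all names and numbers are assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_window(window):
    """Stand-in for one quasi-static time-series run over a contiguous block of
    time steps (in the paper this would be an OpenDSS solve per time step)."""
    start, stop = window
    return [step * 0.1 for step in range(start, stop)]    # placeholder "results"

def split(n_steps, n_chunks):
    """Split [0, n_steps) into n_chunks contiguous windows."""
    bounds = [round(i * n_steps / n_chunks) for i in range(n_chunks + 1)]
    return list(zip(bounds[:-1], bounds[1:]))

if __name__ == "__main__":
    year_of_hours = 8760
    windows = split(year_of_hours, n_chunks=8)
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(simulate_window, windows))
    series = [x for chunk in results for x in chunk]       # stitch the windows back together
    # Each window starts from default control states, which is the source of the
    # boundary errors (and the motivation for controls initialization) in the paper.
    print(len(series))
```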
Nagata, Masatoshi; Yanagihara, Dai; Tomioka, Ryohei; Utsumi, Hideko; Kubota, Yasuo; Yagi, Takeshi; Graybiel, Ann M.; Yamamori, Tetsuo
2011-01-01
Motor control is critical in daily life as well as in artistic and athletic performance and thus is the subject of intense interest in neuroscience. Mouse models of movement disorders have proven valuable for many aspects of investigation, but adequate methods for analyzing complex motor control in mouse models have not been fully established. Here, we report the development of a novel running-wheel system that can be used to evoke simple and complex stepping patterns in mice. The stepping patterns are controlled by spatially organized pegs, which serve as footholds that can be arranged in adjustable, ladder-like configurations. The mice run as they drink water from a spout, providing reward, while the wheel turns at a constant speed. The stepping patterns of the mice can thus be controlled not only spatially, but also temporally. A voltage sensor to detect paw touches is attached to each peg, allowing precise registration of footfalls. We show that this device can be used to analyze patterns of complex motor coordination in mice. We further demonstrate that it is possible to measure patterns of neural activity with chronically implanted tetrodes as the mice engage in vigorous running bouts. We suggest that this instrumented multipeg running wheel (which we name the Step-Wheel System) can serve as an important tool in analyzing motor control and motor learning in mice. PMID:21525375
Kitsukawa, Takashi; Nagata, Masatoshi; Yanagihara, Dai; Tomioka, Ryohei; Utsumi, Hideko; Kubota, Yasuo; Yagi, Takeshi; Graybiel, Ann M; Yamamori, Tetsuo
2011-07-01
Motor control is critical in daily life as well as in artistic and athletic performance and thus is the subject of intense interest in neuroscience. Mouse models of movement disorders have proven valuable for many aspects of investigation, but adequate methods for analyzing complex motor control in mouse models have not been fully established. Here, we report the development of a novel running-wheel system that can be used to evoke simple and complex stepping patterns in mice. The stepping patterns are controlled by spatially organized pegs, which serve as footholds that can be arranged in adjustable, ladder-like configurations. The mice run as they drink water from a spout, providing reward, while the wheel turns at a constant speed. The stepping patterns of the mice can thus be controlled not only spatially, but also temporally. A voltage sensor to detect paw touches is attached to each peg, allowing precise registration of footfalls. We show that this device can be used to analyze patterns of complex motor coordination in mice. We further demonstrate that it is possible to measure patterns of neural activity with chronically implanted tetrodes as the mice engage in vigorous running bouts. We suggest that this instrumented multipeg running wheel (which we name the Step-Wheel System) can serve as an important tool in analyzing motor control and motor learning in mice.
NASA Astrophysics Data System (ADS)
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-06-01
We propose a cheat sensitive quantum protocol to perform a private search on a classical database which is efficient in terms of communication complexity. It allows a user to retrieve an item from the database provider without revealing which item he or she retrieved: if the provider tries to obtain information on the query, the person querying the database can find it out. The protocol ensures also perfect data privacy of the database: the information that the user can retrieve in a single query is bounded and does not depend on the size of the database. With respect to the known (quantum and classical) strategies for private information retrieval, our protocol displays an exponential reduction in communication complexity and in running-time computational complexity.
Li, Jing Xian; Xu, Dong Qing; Hoshizaki, Blaine
2009-01-01
This study examined the proprioception of the foot and ankle complex in regular ice hockey practitioners, runners, and ballet dancers. A total of 45 young people with different exercise habits formed four groups: the ice hockey, ballet dancing, running, and sedentary groups. Kinesthesia of the foot and ankle complex was measured in plantarflexion (PF), dorsiflexion (DF), inversion (IV), and eversion (EV) at 0.4 degrees /s using a custom-made device. The results showed the following: (1) significantly better perceived passive motion sense in PF/DF was found as compared with the measurements in IV/EV within each group (P < .01); (2) ice hockey and ballet groups perceived significantly better passive motion sense in IV/EV than the running (P < .05) and the sedentary (P < .01) groups; and (3) no significant difference in the all measurements was found between running and sedentary groups. The benefits of ice hockey and ballet dancing on proprioception may be associated with their movement characteristics.
Grégoire, Catherine-Alexandra; Tobin, Stephanie; Goldenstein, Brianna L; Samarut, Éric; Leclerc, Andréanne; Aumont, Anne; Drapeau, Pierre; Fulton, Stephanie; Fernandes, Karl J L
2018-01-01
Environmental enrichment (EE) is a powerful stimulus of brain plasticity and is among the most accessible treatment options for brain disease. In rodents, EE is modeled using multi-factorial environments that include running, social interactions, and/or complex surroundings. Here, we show that running and running-independent EE differentially affect the hippocampal dentate gyrus (DG), a brain region critical for learning and memory. Outbred male CD1 mice housed individually with a voluntary running disk showed improved spatial memory in the radial arm maze compared to individually- or socially-housed mice with a locked disk. We therefore used RNA sequencing to perform an unbiased interrogation of DG gene expression in mice exposed to either a voluntary running disk (RUN), a locked disk (LD), or a locked disk plus social enrichment and tunnels [i.e., a running-independent complex environment (CE)]. RNA sequencing revealed that RUN and CE mice showed distinct, non-overlapping patterns of transcriptomic changes versus the LD control. Bio-informatics uncovered that the RUN and CE environments modulate separate transcriptional networks, biological processes, cellular compartments and molecular pathways, with RUN preferentially regulating synaptic and growth-related pathways and CE altering extracellular matrix-related functions. Within the RUN group, high-distance runners also showed selective stress pathway alterations that correlated with a drastic decline in overall transcriptional changes, suggesting that excess running causes a stress-induced suppression of running's genetic effects. Our findings reveal stimulus-dependent transcriptional signatures of EE on the DG, and provide a resource for generating unbiased, data-driven hypotheses for novel mediators of EE-induced cognitive changes.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-lead-time items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
Planning perception and action for cognitive mobile manipulators
NASA Astrophysics Data System (ADS)
Gaschler, Andre; Nogina, Svetlana; Petrick, Ronald P. A.; Knoll, Alois
2013-12-01
We present a general approach to perception and manipulation planning for cognitive mobile manipulators. Rather than hard-coding single-purpose robot applications, a robot should be able to reason about its basic skills in order to solve complex problems autonomously. Humans intuitively solve tasks in real-world scenarios by breaking down abstract problems into smaller sub-tasks and use heuristics based on their previous experience. We apply a similar idea for planning perception and manipulation to cognitive mobile robots. Our approach is based on contingent planning and run-time sensing, integrated in our "knowledge of volumes" planning framework, called KVP. Using the general-purpose PKS planner, we model information-gathering actions at plan time that have multiple possible outcomes at run time. As a result, perception and sensing arise as necessary preconditions for manipulation, rather than being hard-coded as tasks themselves. We demonstrate the effectiveness of our approach on two scenarios covering visual and force sensing on a real mobile manipulator.
Design, Control and in Situ Visualization of Gas Nitriding Processes
Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy
2010-01-01
The article presents a complex system of design, in situ visualization and control of the commonly used surface treatment process: the gas nitriding process. In the computer design conception, analytical mathematical models and artificial intelligence methods were used. As a result, possibilities were obtained of the poly-optimization and poly-parametric simulations of the course of the process combined with a visualization of the value changes of the process parameters in the function of time, as well as possibilities to predict the properties of nitrided layers. For in situ visualization of the growth of the nitrided layer, computer procedures were developed which make use of the results of the correlations of direct and differential voltage and time runs of the process result sensor (magnetic sensor), with the proper layer growth stage. Computer procedures make it possible to combine, in the duration of the process, the registered voltage and time runs with the models of the process. PMID:22315536
NASA Astrophysics Data System (ADS)
Itoh, Shinichi; Ueno, Kenji; Ohkubo, Ryuji; Sagehashi, Hidenori; Funahashi, Yoshisato; Yokoo, Tetsuya
2012-01-01
We developed a T0 chopper rotating at 100 Hz at the High Energy Accelerator Research Organization (KEK) for the reduction of background noise in neutron scattering experiments at the Japan Proton Accelerator Research Complex (J-PARC). The T0 chopper consists of a rotor of 120 kg made from Inconel X750, supported by mechanical bearings in vacuum. The motor is located outside the vacuum and the rotation is transmitted into vacuum through magnetic seals. The motor should rotate in synchronization with the production timing of pulsed neutrons. The rotational fluctuations and running time were in good agreement with the specifications, i.e., phase control accuracy of less than 5 μs and running time of more than 4000 h without changing any component. A semi-auto installation mechanism was developed for installing under the shielding and for maintenance purposes. Based on the result of the development, actual machines were made for the neutron beamlines at J-PARC. We successfully reduced the background noise to 1/30 at neutron energies near 500 meV.
Ahmed, Afaz Uddin; Arablouei, Reza; Hoog, Frank de; Kusy, Branislav; Jurdak, Raja; Bergmann, Neil
2018-05-29
Channel state information (CSI) collected during WiFi packet transmissions can be used for localization of commodity WiFi devices in indoor environments with multipath propagation. To this end, the angle of arrival (AoA) and time of flight (ToF) for all dominant multipath components need to be estimated. A two-dimensional (2D) version of the multiple signal classification (MUSIC) algorithm has been shown to solve this problem using 2D grid search, which is computationally expensive and is therefore not suited for real-time localisation. In this paper, we propose using a modified matrix pencil (MMP) algorithm instead. Specifically, we show that the AoA and ToF estimates can be found independently of each other using the one-dimensional (1D) MMP algorithm and the results can be accurately paired to obtain the AoA-ToF pairs for all multipath components. Thus, the 2D estimation problem reduces to running 1D estimation multiple times, substantially reducing the computational complexity. We identify and resolve the problem of degenerate performance when two or more multipath components have the same AoA. In addition, we propose a packet aggregation model that uses the CSI data from multiple packets to improve the performance under noisy conditions. Simulation results show that our algorithm achieves two orders of magnitude reduction in the computational time over the 2D MUSIC algorithm while achieving similar accuracy. High accuracy and low computation complexity of our approach make it suitable for applications that require location estimation to run on resource-constrained embedded devices in real time.
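For reference, the sketch below is the plain textbook one-dimensional matrix pencil estimator for the poles of a sum of complex exponentials, which is the role the 1D estimates play here (phase slopes across subcarriers encode ToF and across antennas AoA). The modified matrix pencil (MMP), the AoA-ToF pairing step and the packet aggregation model of the paper are not reproduced; the pencil parameter and test signal are illustrative.

```python
import numpy as np

def matrix_pencil(x, num_components, pencil=None):
    """Generic 1D matrix pencil estimate of the poles z_k in x[n] = sum_k a_k * z_k**n.

    Plain textbook estimator, not the modified matrix pencil (MMP) of the paper.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    L = pencil if pencil is not None else N // 2            # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])    # Hankel data matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)    # shift-invariance pencil
    order = np.argsort(-np.abs(eigvals))                    # keep the dominant poles
    return eigvals[order][:num_components]

# Example: two complex exponentials recovered from 32 noiseless samples.
n = np.arange(32)
x = np.exp(1j * 0.6 * n) + 0.5 * np.exp(1j * 1.9 * n)
poles = matrix_pencil(x, num_components=2)
print(np.sort(np.angle(poles)))   # approximately [0.6, 1.9]
```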
Pathways to designing and running an operational flood forecasting system: an adventure game!
NASA Astrophysics Data System (ADS)
Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma
2017-04-01
In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation to and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.
The Web Based Monitoring Project at the CMS Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf
The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end, the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and describes how the system has been used from the beginning of data taking until now (Run 1 and Run 2).
Optimized Diffusion of Run-and-Tumble Particles in Crowded Environments
NASA Astrophysics Data System (ADS)
Bertrand, Thibault; Zhao, Yongfeng; Bénichou, Olivier; Tailleur, Julien; Voituriez, Raphaël
2018-05-01
We study the transport of self-propelled particles in dynamic complex environments. To obtain exact results, we introduce a model of run-and-tumble particles (RTPs) moving in discrete time on a d-dimensional cubic lattice in the presence of diffusing hard-core obstacles. We derive an explicit expression for the diffusivity of the RTP, which is exact in the limit of low density of fixed obstacles. To do so, we introduce a generalization of Kac's theorem on the mean return times of Markov processes, which we expect to be relevant for a large class of lattice gas problems. Our results show the diffusivity of RTPs to be nonmonotonic in the tumbling probability for low enough obstacle mobility. These results prove the potential for the optimization of the transport of RTPs in crowded and disordered environments with applications to motile artificial and biological systems.
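A brute-force counterpart to the analytical results can be set up in a few lines: a discrete-time run-and-tumble walker on a periodic square lattice that only steps onto obstacle-free sites. The sketch below uses fixed obstacles and estimates an effective diffusivity from the squared displacement, so it illustrates the setup rather than reproducing the paper's exact results for diffusing obstacles; all parameter values are illustrative.

```python
import numpy as np

def rtp_diffusivity(tumble_prob, obstacle_density, L=64, steps=20000, rng=None):
    """Minimal 2D lattice run-and-tumble particle among fixed hard-core obstacles.

    Estimates an effective diffusivity from the final squared displacement,
    D ~ <r^2> / (2 d t) with d = 2. Toy illustration only: the paper treats
    diffusing obstacles and derives exact results.
    """
    rng = rng or np.random.default_rng(1)
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    obstacles = rng.random((L, L)) < obstacle_density
    obstacles[0, 0] = False                      # start on a free site
    pos = np.array([0, 0])
    unwrapped = np.zeros(2)
    direction = rng.integers(4)
    for _ in range(steps):
        if rng.random() < tumble_prob:           # tumble: draw a new run direction
            direction = rng.integers(4)
        trial = (pos + moves[direction]) % L     # periodic boundaries
        if not obstacles[trial[0], trial[1]]:    # hard core: only move onto free sites
            pos = trial
            unwrapped += moves[direction]
        # otherwise the run is blocked and the particle waits in place
    return float(unwrapped @ unwrapped) / (4 * steps)

for p in (0.05, 0.2, 0.5):
    print(p, rtp_diffusivity(tumble_prob=p, obstacle_density=0.1))
```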
The web based monitoring project at the CMS experiment
NASA Astrophysics Data System (ADS)
Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf; Chakaberia, Irakli; Jo, Youngkwon; Maeshima, Kaori; Maruyama, Sho; Patrick, James; Rapsevicius, Valdas; Soha, Aron; Stankevicius, Mantas; Sulmanas, Balys; Toda, Sachiko; Wan, Zongru
2017-10-01
The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end, the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and describes how the system has been used from the beginning of data taking until now (Run 1 and Run 2).
Optimizing distance-based methods for large data sets
NASA Astrophysics Data System (ADS)
Scholl, Tobias; Brenner, Thomas
2015-10-01
Distance-based methods for measuring spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
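The core operation behind such distance-based measures is counting point pairs within a distance threshold. A common way to avoid materializing the O(n²) distance matrix is a spatial index, as in the sketch below; this is only an illustration using scipy's k-d tree, not the constant-memory algorithm proposed in the paper, and the coordinates are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def pairs_within(points, r):
    """Count distinct point pairs within distance r using a k-d tree.

    Avoids building the full n x n distance matrix; illustration only, not the
    algorithm proposed in the paper.
    """
    tree = cKDTree(points)
    total = tree.count_neighbors(tree, r)    # counts (i, j) and (j, i), plus self-pairs
    return (total - len(points)) // 2

rng = np.random.default_rng(0)
firms = rng.random((100_000, 2)) * 100.0     # synthetic firm coordinates in a 100 km square
print(pairs_within(firms, r=1.0))
```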
Solving Equations of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Lim, Christopher
2007-01-01
Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing and analyzing, and developing software for the control of, complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is always combined with Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is always infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
The Shock and Vibration Digest. Volume 13. Number 2
1981-02-01
Program evaluation criteria discussed include accuracy, running time, core storage, complexity of program execution, and ease of implementation. The issue also lists conference sessions on topics such as optimality criteria methods and mathematical programming, together with contributions on gunfire blast pressure predictions and aircraft fuel tank slosh and vibration testing.
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many ways can be explored, among which one can find the improvement of spatial representation, the search for more robust parametrization, the better formulation of some processes or the modification of model structures by trial-and-error procedure. Several past works indicate that model parameters and structure can be dependent on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps, to find solutions for improvements. Here we analyse the impact of data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, by using a large data set of 240 catchments. To this end, fine time step hydro-climatic information at sub-hourly resolution is used as input of a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to be run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and consistency of internal fluxes at different time steps provides guidance to the identification of the model components that should be improved. Our analysis indicates that the baseline model structure is to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception model component, whose output flux showed the strongest sensitivity to modelling time step. The dependency of the optimal model complexity on time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
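The time-step sensitivity of internal fluxes can be illustrated with a deliberately simple store, run on the same rainfall aggregated to two different time steps. The sketch below is a toy model, not GR4J or the GR structure used in the study, and its parameters are arbitrary; it merely shows why fluxes such as actual evapotranspiration can drift when sub-daily wet/dry sequencing is lost.

```python
import numpy as np

def run_toy_model(precip_mm, dt_hours, k_per_hour=0.05, et_rate_mm_per_hour=0.1):
    """Toy single-store model (not GR4J): fixed-rate ET demand, then linear drainage.

    precip_mm must already be aggregated to the chosen time step, so the same
    forcing can be compared across time steps. Returns total actual ET and flow.
    """
    store, total_et, total_q = 0.0, 0.0, 0.0
    for p in precip_mm:
        store += p
        et = min(store, et_rate_mm_per_hour * dt_hours)      # ET limited by storage
        store -= et
        q = store * (1.0 - np.exp(-k_per_hour * dt_hours))   # linear-reservoir drainage
        store -= q
        total_et += et
        total_q += q
    return total_et, total_q

rng = np.random.default_rng(2)
hourly_p = rng.exponential(0.2, size=30 * 24)                # 30 days of hourly rain (mm)
daily_p = hourly_p.reshape(30, 24).sum(axis=1)               # the same rain at a daily step

print("hourly run:", run_toy_model(hourly_p, dt_hours=1))
print("daily run :", run_toy_model(daily_p, dt_hours=24))
```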
Is Single-Port Laparoscopy More Precise and Faster with the Robot?
Fransen, Sofie A F; van den Bos, Jacqueline; Stassen, Laurents P S; Bouvy, Nicole D
2016-11-01
Single-port laparoscopy is a step forward toward nearly scarless surgery. Concern has been raised that single-incision laparoscopic surgery (SILS) is technically more challenging because of the lack of triangulation and the clashing of instruments. Robotic single-incision laparoscopic surgery (RSILS) in a chopstick setting might overcome these problems. This study evaluated the outcome in time and errors of two tasks of the Fundamentals of Laparoscopic Surgery on a dry platform, in two settings: SILS versus RSILS. Nine experienced laparoscopic surgeons performed two tasks: peg transfer and a suturing task, on a standard box trainer. All participants practiced each task three times in both settings: SILS and an RSILS setting. The assessment scores (time and errors) were recorded. For the first task of peg transfer, RSILS was significantly better in time (124 versus 230 seconds, P = .0004) and errors (0.80 errors versus 2.60 errors, P = .024) at the first run, compared to the SILS setting. At the third and final run, RSILS still proved to be significantly better in errors (0.10 errors versus 0.80 errors, P = .025) compared to the SILS group. RSILS was faster in the third run, but not significantly (116 versus 157 seconds, P = .08). For the second task, a suturing task, only 3 participants of the SILS group were able to perform this task within the set time frame of 600 seconds. There was no significant difference in time in the three runs between SILS and RSILS for the 3 participants that fulfilled both tasks within the 600 seconds. This study shows that robotic single-port surgery seems easier, faster, and more precise for performing basic tasks of the Fundamentals of Laparoscopic Surgery. For the more complex task of suturing, only the single-port robotic setting enabled all participants to fulfill this task within the set time frame.
Lohman, Everett B; Balan Sackiriyas, Kanikkai Steni; Swen, R Wesley
2011-11-01
Recreational running has many proven benefits which include increased cardiovascular, physical and mental health. It is no surprise that Running USA reported over 10 million individuals completed running road races in 2009 not to mention recreational joggers who do not wish to compete in organized events. Unfortunately there are numerous risks associated with running, the most common being musculoskeletal injuries attributed to incorrect shoe choice, training errors and excessive shoe wear or other biomechanical factors associated with ground reaction forces. Approximately 65% of chronic injuries in distance runners are related to routine high mileage, rapid increases in mileage, increased intensity, hills or irregular surface running, and surface firmness. Humans have been running barefooted or wearing minimally supportive footwear such as moccasins or sandals since the beginning of time while modernized running shoes were not invented until the 1970s. However, the current trend is that many runners are moving back to barefoot running or running in "minimal" shoes. The goal of this masterclass article is to examine the similarities and differences between shod and unshod (barefoot or minimally supportive running shoes) runners by examining spatiotemporal parameters, energetics, and biomechanics. These running parameters will be compared and contrasted with walking. The most obvious difference between the walking and running gait cycle is the elimination of the double limb support phase of walking gait in exchange for a float (no limb support) phase. The biggest difference between barefoot and shod runners is at the initial contact phase of gait where the barefoot and minimally supported runner initiates contact with their forefoot or midfoot instead of the rearfoot. As movement science experts, physical therapists are often called upon to assess the gait of a running athlete, their choice of footwear, and training regime. With a clearer understanding of running and its complexities, the physical therapist will be able to better identify faults and create informed treatment plans while rehabilitating patients who are experiencing musculoskeletal injuries due to running. Copyright © 2011 Elsevier Ltd. All rights reserved.
Weather model performance on extreme rainfall event simulations over the Western Iberian Peninsula
NASA Astrophysics Data System (ADS)
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal mainland rainy season. The periods of heavy to extremely heavy rainfall were due to several low surface pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but model performance of the other two runs was also good, so that the selected extreme rainfall episode was successfully reproduced.
Using AberOWL for fast and scalable reasoning over BioPortal ontologies.
Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert
2016-08-08
Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.
Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Feikema, Douglas A.
2003-01-01
This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number Ma was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations were also performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons for this: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. Numerical simulations: to explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
Dong, Shuqing; Gao, Ruibin; Yang, Yan; Guo, Mei; Ni, Jingman; Zhao, Liang
2014-03-15
Although the separation efficiency of capillary electrophoresis (CE) is much higher than that of other chromatographic methods, it is sometimes difficult to adequately separate the complex ingredients in biological samples. This article describes how one effective and simple way to improve the separation efficiency in CE is to add modifiers to the running buffer. The running buffer modifier β-cyclodextrin (β-CD) was explored to rapidly and completely separate four phenylethanoid glycosides and aglycones (homovanillyl alcohol, hydroxytyrosol, 3,4-dimethoxycinnamic acid, and caffeic acid) in Lamiophlomis rotata (Lr) and Cistanche by capillary zone electrophoresis with ultraviolet (UV) detection. It was found that when β-CD was used as the running buffer modifier, a baseline separation of the four analytes could be accomplished in less than 20 min, with detection limits as low as 10⁻³ mg L⁻¹. Other factors affecting the CE separation, such as working potential, pH value and ionic strength of the running buffer, separation voltage, and sample injection time, were investigated extensively. Under the optimal conditions, a successful practical application to the determination of Lr and Cistanche samples confirmed the validity and practicability of this method. Copyright © 2014 Elsevier Inc. All rights reserved.
2. X15 RUN UP AREA (Jan 59). A sharp, higher ...
2. X-15 RUN UP AREA (Jan 59). A sharp, higher altitude low oblique aerial view to the north, showing runway, at far left; X-15 Engine Test Complex in the center. This view predates construction of observation bunkers. - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA
Continuous-time quantum search on balanced trees
NASA Astrophysics Data System (ADS)
Philipp, Pascal; Tarrataca, Luís; Boettcher, Stefan
2016-03-01
We examine the effect of network heterogeneity on the performance of quantum search algorithms. To this end, we study quantum search on a tree for the oracle Hamiltonian formulation employed by continuous-time quantum walks. We use analytical and numerical arguments to show that the exponent of the asymptotic running time ~N^β changes uniformly from β = 0.5 to β = 1 as the searched-for site is moved from the root of the tree towards the leaves. These results imply that the time complexity of the quantum search algorithm on a balanced tree is closely correlated with certain path-based centrality measures of the searched-for site.
Fast support vector data descriptions for novelty detection.
Liu, Yi-Hung; Liu, Yan-Chen; Chen, Yen-Jen
2010-08-01
Support vector data description (SVDD) has become a very attractive kernel method due to its good results in many novelty detection problems. However, the decision function of SVDD is expressed in terms of the kernel expansion, which results in a run-time complexity linear in the number of support vectors. For applications where fast real-time response is needed, speeding up the decision function is crucial. This paper addresses the issue of reducing the testing time complexity of SVDD. A method called fast SVDD (F-SVDD) is proposed. Unlike traditional methods, which all try to compress a kernel expansion into one with fewer terms, the proposed F-SVDD directly finds the preimage of a feature vector and then uses a simple relationship between this feature vector and the SVDD sphere center to re-express the center with a single vector. The decision function of F-SVDD contains only one kernel term, and thus the decision boundary of F-SVDD is only spherical in the original space. Hence, the run-time complexity of the F-SVDD decision function is no longer linear in the number of support vectors but is a constant, no matter how large the training set size is. In this paper, we also propose a novel direct preimage-finding method, which is noniterative and involves no free parameters. The unique preimage can be obtained in real time by the proposed direct method without trial and error. For demonstration, several real-world data sets and a large-scale data set, the extended MIT face data set, are used in experiments. In addition, a practical industry example regarding liquid crystal display micro-defect inspection is used to compare the applicability of SVDD and the proposed F-SVDD when faced with mass data input. The results are very encouraging.
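To illustrate the run-time contrast described above, the sketch below compares a conventional SVDD decision score (a kernel expansion over all support vectors) with an F-SVDD-style score that uses a single kernel term against a precomputed center preimage. The Gaussian kernel, the variable names, and the assumption that the preimage z and the thresholds are already available are illustrative only; the paper's direct preimage-finding step is not reproduced here.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svdd_decision(x, support_vectors, alphas, threshold, gamma=0.5):
    """Conventional SVDD score: one kernel evaluation per support vector,
    so the cost grows linearly with the number of support vectors."""
    expansion = sum(a * rbf(x, sv, gamma) for a, sv in zip(alphas, support_vectors))
    return expansion - threshold

def fsvdd_decision(x, center_preimage, threshold, gamma=0.5):
    """F-SVDD-style score: the sphere center is re-expressed by a single
    preimage vector z, so only one kernel term is evaluated per test point."""
    return rbf(x, center_preimage, gamma) - threshold

# Hypothetical numbers purely for demonstration.
rng = np.random.default_rng(0)
svs = rng.normal(size=(200, 5))          # 200 support vectors
alphas = np.full(200, 1.0 / 200)         # uniform expansion coefficients
z = svs.mean(axis=0)                     # stand-in for the true preimage
x_test = rng.normal(size=5)
print(svdd_decision(x_test, svs, alphas, 0.1))   # O(#SV) kernel evaluations
print(fsvdd_decision(x_test, z, 0.1))            # O(1) kernel evaluation
```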
Ada 9X Project Revision Request Report. Supplement 1
1990-01-01
Non-portable use of operating system primitives or of Ada run time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that...complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move, etc.) as a single entity...system, to support fault tolerance, load sharing, change of system operating mode, etc. It is highly desirable that such important software be written in
Case acceptance: no random acts allowed.
McAnally, James
2009-12-01
Consider implementing a case acceptance system in your office to fully utilize your hard-earned clinical skills, and to experience the professional rewards that come with serving more patients at a higher level. Doctors who are willing to commit the time and resources necessary to improving case acceptance will increase the number of implant cases entering the treatment phase--cases that run the gamut of complexity and support fees commensurate with their skills!
A Study on Run Time Assurance for Complex Cyber Physical Systems
2013-04-18
safety verification approach was applied to synchronization of distributed local clocks of the nodes on a CAN bus by Jiang et al. [36]. The class of...mode of interaction between the instrumented system and the checker, we distinguish between synchronous and asynchronous monitoring. In synchronous...occurred. Synchronous monitoring may deliver a higher degree of assurance than the asynchronous one, because it can block a dangerous action. However
NASA Astrophysics Data System (ADS)
Pryahina, G.; Zelepukina, E.; Guzel, N.
2012-04-01
Calculating hydrological characteristics of small mountain rivers in glacierized basins is often difficult because of the absence of standard hydrological observations in remote mountain territories. The only way to obtain actual information on the water regime of such rivers is field work. The rivers of the Mongun-Taiga massif, located at the junction of the Altai and Sayan mountains, became objects of hydrological research during complex expeditions of the Russian Geographical Society in 2010-2011. The Mongun-Taiga cluster of the international biosphere reserve "Ubsunurskaya hollow" has attracted heightened interest from researchers and geographers for many years. An original landscape map at 1:100000 scale was produced, and hydrological observations were carried out on the rivers East Mugur and Mugur, which belong to the endorheic basin of Inner Asia. Observations of East Mugur runoff were made at a profile at the glacier tongue (glacierized area 22% (3.2 km2) of the catchment) and at the closing profile of the river, located 3.4 km below the glacier tongue. The following results were obtained. During the ablation period, diurnal fluctuations with strongly pronounced maxima and minima of water discharge are typical for small rivers with a considerable share of glacial feeding. The runoff maximum from the glacier occurs from 2 to 7 p.m.; the runoff minimum is observed early in the morning. The high speed of meltwater runoff from the glacier tongue and the rather small volume of dynamic water storage on the ice surface lead to growth of water discharge. In the lower profile the times of maximum and minimum discharge are shifted by 2 hours on average, depending on the travel time of the water. The maximum glacial runoff discharge (1.12 m3/s) in the upper profile was registered on July 16 (a day without rain). Daily runoff volumes in the upper and lower profiles were 60,700-67,600 m3 that day. Runoff from the non-glacial part of the basin is formed by groundwater and melting snowfields; during periods without rainfall it amounted to 10% of the runoff in the lower profile. We suggest that this discharge corresponds to the base flow in the lower profile, because the snowfield area in the basin was < 0.1 km2 that year. Runoff monitoring showed that rivers with a small glacial contribution are characterized by the absence of a diurnal runoff cycle. During rainfall the water content of the river increases due to the substantial drainage of the basin and, as a result, the fast flow of rain water into the river bed. The sharp decrease in water content during periods without rainfall indicates low storage of soil water and groundwater and a low rate of glacial melt. Thus, glaciers and the character of the relief influence the formation of runoff of small mountain rivers. The results of this research will be used for mathematical modeling of mountain river runoff.
Kishimoto, Mai; Tsuchiaka, Shinobu; Rahpaya, Sayed Samim; Hasebe, Ayako; Otsu, Keiko; Sugimura, Satoshi; Kobayashi, Suguru; Komatsu, Natsumi; Nagai, Makoto; Omatsu, Tsutomu; Naoi, Yuki; Sano, Kaori; Okazaki-Terashima, Sachiko; Oba, Mami; Katayama, Yukie; Sato, Reiichiro; Asai, Tetsuo; Mizutani, Tetsuya
2017-03-18
Bovine respiratory disease complex (BRDC) is frequently found in cattle worldwide. The etiology of BRDC is complicated by infections with multiple pathogens, making identification of the causal pathogen difficult. Here, we developed a detection system by applying TaqMan real-time PCR (Dembo respiratory-PCR) to screen a broad range of microbes associated with BRDC in a single run. We selected 16 bovine respiratory pathogens (bovine viral diarrhea virus, bovine coronavirus, bovine parainfluenza virus 3, bovine respiratory syncytial virus, influenza D virus, bovine rhinitis A virus, bovine rhinitis B virus, bovine herpesvirus 1, bovine adenovirus 3, bovine adenovirus 7, Mannheimia haemolytica, Pasteurella multocida, Histophilus somni, Trueperella pyogenes, Mycoplasma bovis and Ureaplasma diversum) as detection targets and designed novel specific primer-probe sets for nine of them. The assay performance was assessed using standard curves from synthesized DNA. In addition, the sensitivity of the assay was evaluated by spiking solutions extracted from nasal swabs that were negative by Dembo respiratory-PCR for nucleic acids of pathogens or synthesized DNA. All primer-probe sets showed high sensitivity. In this study, a total of 40 nasal swab samples from cattle on six farms were tested by Dembo respiratory-PCR. Dembo respiratory-PCR can be applied as a screening system with wide detection targets.
A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation
NASA Astrophysics Data System (ADS)
Cho, Min Hyung; Cai, Wei
2010-12-01
A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of the 2D complex Helmholtz equation in a fast manner for a wide range of complex wave number k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes the description of theoretical backgrounds, the FMM algorithm, software structures, and some test runs. Program summary. Program title: 2D-WFMM; Catalogue identifier: AEHI_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 4636; No. of bytes in distributed program, including test data, etc.: 82582; Distribution format: tar.gz; Programming language: C; Computer: Any; Operating system: Any operating system with gcc version 4.2 or newer; Has the code been vectorized or parallelized?: Multi-core processors with shared memory; RAM: Depending on the number of particles N and the wave number k; Classification: 4.8, 4.12; External routines: OpenMP (http://openmp.org/wp/); Nature of problem: Evaluate interaction between N particles governed by the fundamental solution of the 2D Helmholtz equation with complex k; Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with a cutoff level which combines the low frequency method and the high frequency method; Running time: Depending on the number of particles N, wave number k, and number of cores in the CPU. CPU time increases as N log N.
Integrating planning and reactive control
NASA Technical Reports Server (NTRS)
Wilkins, David E.; Myers, Karen L.
1994-01-01
Our research is developing persistent agents that can achieve complex tasks in dynamic and uncertain environments. We refer to such agents as taskable, reactive agents. An agent of this type requires a number of capabilities. The ability to execute complex tasks necessitates the use of strategic plans for accomplishing tasks; hence, the agent must be able to synthesize new plans at run time. The dynamic nature of the environment requires that the agent be able to deal with unpredictable changes in its world. As such, agents must be able to react to unanticipated events by taking appropriate actions in a timely manner, while continuing activities that support current goals. The unpredictability of the world could lead to failure of plans generated for individual tasks. Agents must have the ability to recover from failures by adapting their activities to the new situation, or replanning if the world changes sufficiently. Finally, the agent should be able to perform in the face of uncertainty. The Cypress system, described here, provides a framework for creating taskable, reactive agents. Several features distinguish our approach: (1) the generation and execution of complex plans with parallel actions; (2) the integration of goal-driven and event driven activities during execution; (3) the use of evidential reasoning for dealing with uncertainty; and (4) the use of replanning to handle run-time execution problems. Our model for a taskable, reactive agent has two main intelligent components, an executor and a planner. The two components share a library of possible actions that the system can take. The library encompasses a full range of action representations, including plans, planning operators, and executable procedures such as predefined standard operating procedures (SOP's). These three classes of actions span multiple levels of abstraction.
Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena
2017-02-01
Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.
2010-08-01
paraffins, olefins, cyclo-paraffins (naphthenes), aromatics and a host of trace species. Petroleum distillates such as jet fuels are also a complex...LC method consisted of: Mobile Phase: 95% CH3OH + 0.1% (vol) Acetic Acid, 5% De-Ionized H2O; Injection Volume: 5 µL; Needle Wash in Flush...Port for 20 seconds using mobile phase CH3OH + 0.1% (vol) Acetic Acid; Run Time: 10 minutes; Post Time: 1 minute; Binary Pump SL Flow Rate: 0.3 mL/min
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
NASA Astrophysics Data System (ADS)
Lang, C.; Fettweis, X.; Kittel, C.; Erpicum, M.
2017-12-01
We present the results of high resolution simulations of the climate and SMB of Svalbard with the regional climate model MAR forced by ERA-40 and then ERA-Interim, as well as an online downscaling method allowing us to model the SMB and its components at a resolution twice as high (2.5 vs 5 km here) using only about 25% more CPU time. Spitsbergen, the largest island in Svalbard, has a very hilly topography, and a high spatial resolution is needed to correctly represent the local topography and the complex pattern of ice distribution and precipitation. However, high resolution runs with an RCM fully coupled to an energy balance module like MAR require a huge amount of computation time. The hydrostatic equilibrium hypothesis used in MAR also becomes less valid as the spatial resolution increases. We therefore developed in MAR a method to run the snow module at a resolution twice as high as the atmospheric module. Near-surface temperature and humidity are corrected on a grid with a resolution twice as high, as a function of their local gradients and the elevation difference between the corresponding pixels in the two grids. We compared the results of our runs at 5 km, and with SMB downscaled to 2.5 km, over 1960-2016 against previous 10 km runs. On Austfonna, where the slopes are gentle, the agreement between observations and the 5 km SMB is better than with the 10 km SMB. It is again improved at 2.5 km, but the gain is relatively small, showing the value of our method compared with running a time-consuming classic 2.5 km resolution simulation. On Spitsbergen, we show that a spatial resolution of 2.5 km is still not enough to represent the complex pattern of topography, precipitation and SMB. Due to a change in the summer atmospheric circulation, from a westerly flow over Svalbard to a northwesterly flow bringing colder air, the SMB of Svalbard was stable between 2006 and 2012, while several melt records were broken in Greenland due to conditions more anticyclonic than usual. In 2013, the reverse situation happened and a southwesterly atmospheric circulation brought warmer air over Svalbard. The SMB broke the record of the previous 55 years. In 2016, the temperature was higher than average and a new melt record was set despite a northwesterly flow. The northerly flow still mitigated the warming over Svalbard, which was much lower than in most regions of the Arctic.
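A minimal sketch of the kind of elevation-based correction described above: near-surface temperature on a finer grid is adjusted using a local vertical gradient and the elevation difference between fine and coarse pixels. The variable names, the constant lapse rate, and the array shapes are assumptions for illustration; MAR's actual online downscaling computes the gradients internally and also corrects humidity.

```python
import numpy as np

def downscale_temperature(t_coarse, z_coarse, z_fine, lapse_rate=-0.0065):
    """Adjust coarse-grid near-surface temperature (K) to a grid twice as fine
    using the elevation difference (m) and a local vertical gradient (K/m)."""
    # Replicate each coarse pixel onto the 2x2 block of fine pixels it covers.
    t_rep = np.kron(t_coarse, np.ones((2, 2)))
    z_rep = np.kron(z_coarse, np.ones((2, 2)))
    # Shift the replicated field by the gradient times the elevation difference.
    return t_rep + lapse_rate * (z_fine - z_rep)

# Toy 2x2 coarse grid downscaled to a 4x4 fine grid (all numbers made up).
t5km = np.array([[270.0, 268.0], [265.0, 262.0]])
z5km = np.array([[200.0, 400.0], [800.0, 1200.0]])
z2p5km = np.kron(z5km, np.ones((2, 2))) + np.random.default_rng(1).normal(0, 50, (4, 4))
print(downscale_temperature(t5km, z5km, z2p5km))
```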
Advances in analytical methodologies to guide bioprocess engineering for bio-therapeutics.
Saldova, Radka; Kilcoyne, Michelle; Stöckmann, Henning; Millán Martín, Silvia; Lewis, Amanda M; Tuite, Catherine M E; Gerlach, Jared Q; Le Berre, Marie; Borys, Michael C; Li, Zheng Jian; Abu-Absi, Nicholas R; Leister, Kirk; Joshi, Lokesh; Rudd, Pauline M
2017-03-01
This study was performed to monitor the glycoform distribution of a recombinant antibody fusion protein expressed in CHO cells over the course of fed-batch bioreactor runs using high-throughput methods to accurately determine the glycosylation status of the cell culture and its product. Three different bioreactors running similar conditions were analysed at the same five time-points using the advanced methods described here. N-glycans from cell and secreted glycoproteins from CHO cells were analysed by HILIC-UPLC and MS, and the total glycosylation (both N- and O-linked glycans) secreted from the CHO cells were analysed by lectin microarrays. Cell glycoproteins contained mostly high mannose type N-linked glycans with some complex glycans; sialic acid was α-(2,3)-linked, galactose β-(1,4)-linked, with core fucose. Glycans attached to secreted glycoproteins were mostly complex with sialic acid α-(2,3)-linked, galactose β-(1,4)-linked, with mostly core fucose. There were no significant differences noted among the bioreactors in either the cell pellets or supernatants using the HILIC-UPLC method and only minor differences at the early time-points of days 1 and 3 by the lectin microarray method. In comparing different time-points, significant decreases in sialylation and branching with time were observed for glycans attached to both cell and secreted glycoproteins. Additionally, there was a significant decrease over time in high mannose type N-glycans from the cell glycoproteins. A combination of the complementary methods HILIC-UPLC and lectin microarrays could provide a powerful and rapid HTP profiling tool capable of yielding qualitative and quantitative data for a defined biopharmaceutical process, which would allow valuable near 'real-time' monitoring of the biopharmaceutical product. Copyright © 2016 Elsevier Inc. All rights reserved.
Influence of Running and Walking on Hormonal Regulators of Appetite in Women
Larson-Meyer, D. Enette; Palm, Sonnie; Bansal, Aasthaa; Austin, Kathleen J.; Hart, Ann Marie; Alexander, Brenda M.
2012-01-01
Nine female runners and ten walkers completed a 60 min moderate-intensity (70% VO2max) run or walk, or 60 min rest, in counterbalanced order. Plasma concentrations of the orexigenic peptide ghrelin, the anorexigenic peptides peptide YY (PYY) and glucagon-like peptide-1 (GLP-1), and appetite ratings were measured at 30 min intervals for 120 min, followed by a free-choice meal. Both orexigenic and anorexigenic peptides were elevated after running, but no changes were observed after walking. Relative energy intake (adjusted for cost of exercise/rest) was negative in the meal following running (−194 ± 206 kcal) versus walking (41 ± 196 kcal) (P = 0.015), although both were suppressed (P < 0.05) compared to rest (299 ± 308 and 284 ± 121 kcal, resp.). The average rate of change in PYY and GLP-1 over time predicted appetite in runners, but only the change in GLP-1 predicted hunger (P = 0.05) in walkers. Results provide evidence that exercise-induced alterations in appetite are likely driven by complex changes in appetite-regulating hormones rather than change in a single gut peptide. PMID:22619704
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches; in addition, this new method allows well-controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
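The sketch below shows the general idea of driving allocation logic from a decision table rather than hard-coded branches: each row lists conditions and an outcome, and a small rule engine evaluates rows against a candidate. The table contents, field names, and point values are hypothetical and are not the actual ETKAS rules.

```python
# Each row of the decision table: a dict of conditions plus an outcome.
DECISION_TABLE = [
    {"when": {"blood_group_match": True,  "waiting_years_min": 5}, "points": 400},
    {"when": {"blood_group_match": True,  "waiting_years_min": 0}, "points": 200},
    {"when": {"blood_group_match": False, "waiting_years_min": 0}, "points": 0},
]

def evaluate(candidate, table=DECISION_TABLE):
    """Return the outcome of the first row whose conditions the candidate meets."""
    for row in table:
        cond = row["when"]
        if (candidate["blood_group_match"] == cond["blood_group_match"]
                and candidate["waiting_years"] >= cond["waiting_years_min"]):
            return row["points"]
    return None  # an incomplete table is detectable: no row matched

print(evaluate({"blood_group_match": True, "waiting_years": 7}))   # 400
print(evaluate({"blood_group_match": True, "waiting_years": 2}))   # 200
```

Because the rules live in a plain data structure, completeness and consistency checks, as well as controlled simulations with modified tables, reduce to scanning or swapping the table rather than changing program code.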
Network geometry inference using common neighbors
NASA Astrophysics Data System (ADS)
Papadopoulos, Fragkiskos; Aldecoa, Rodrigo; Krioukov, Dmitri
2015-08-01
We introduce and explore a method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high degree nodes, the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring O(t⁴) running time to map a network of t nodes, versus O(t³) in the link-based method. But we also develop a hybrid method with O(t³) running time, which combines the common-neighbors and link-based approaches, and we explore a heuristic that reduces its running time further to O(t²), without significant reduction in the mapping accuracy. We apply this method to the autonomous systems (ASs) Internet, and we reveal how soft communities of ASs evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASs. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step toward understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
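A sketch of the basic quantity the common-neighbors approach relies on: the number of common neighbors for every node pair, computed from adjacency sets. The toy edge list is made up; the actual coordinate-inference step (embedding nodes in the latent space from these counts) is beyond this fragment.

```python
from itertools import combinations

def common_neighbor_counts(edges):
    """Return a dict mapping each node pair to its number of common neighbors."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {
        (u, v): len(adj[u] & adj[v])
        for u, v in combinations(sorted(adj), 2)
    }

# Toy network purely for illustration.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")]
for pair, count in common_neighbor_counts(edges).items():
    print(pair, count)
```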
Mixing times in quantum walks on two-dimensional grids
NASA Astrophysics Data System (ADS)
Marquezino, F. L.; Portugal, R.; Abal, G.
2010-10-01
Mixing properties of discrete-time quantum walks on two-dimensional grids with toruslike boundary conditions are analyzed, focusing on their connection to the complexity of the corresponding abstract search algorithm. In particular, an exact expression for the stationary distribution of the coherent walk over odd-sided lattices is obtained after solving the eigenproblem for the evolution operator for this particular graph. The limiting distribution and mixing time of a quantum walk with a coin operator modified as in the abstract search algorithm are obtained numerically. On the basis of these results, the relation between the mixing time of the modified walk and the running time of the corresponding abstract search algorithm is discussed.
BigDataScript: a scripting language for data pipelines.
Cingolani, Pablo; Sladek, Rob; Blanchette, Mathieu
2015-01-01
The analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability. We introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code. BigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript. © The Author 2014. Published by Oxford University Press.
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
Sacino, Amanda N; Shuster, Jonathan J; Nowicki, Kamil; Carek, Peter J; Wegman, Martin P; Listhaus, Alyson; Gibney, Joseph M; Chang, Ku-Lang
2016-02-01
As the number of patients with access to care increases, outpatient clinics will need to implement innovative strategies to maintain or enhance clinic efficiency. One viable alternative involves reverse triage. A reverse triage protocol was implemented during a student-run free clinic. Each patient's chief complaint(s) were obtained at the beginning of the clinic session and ranked by increasing complexity. "Complexity" was defined as the subjective amount of time required to provide a full, thorough evaluation of a patient. Less complex cases were prioritized first since they could be expedited through clinic processing and allow for more time and resources to be dedicated to complex cases. Descriptive statistics were used to characterize and summarize the data obtained. Categorical variables were analyzed using chi-square. A time series analysis of the outcome versus centered time in weeks was also conducted. The average number of patients seen per clinic session increased by 35% (9.5 versus 12.8) from pre-implementation of the reverse triage protocol to 6 months after the implementation of the protocol. The implementation of a reverse triage in an outpatient setting significantly increased clinic efficiency as noted by a significant increase in the number of patients seen during a clinic session.
Simulation Study of Evacuation Control Center Operations Analysis
2011-06-01
[Table of contents and list-of-tables fragments: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B. Key Statistic Matrix: Runs 1-12; Appendix C. Blue Dart; Paired T result - Run 5 v. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9]
An analysis of running skyline load path.
Ward W. Carson; Charles N. Mann
1971-01-01
This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summary. Program title: MonteCarloMaplet; Catalogue identifier: ADZU_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 3251; No. of bytes in distributed program, including test data, etc.: 296465; Distribution format: tar.gz; Programming language: Maple 10; Computer: Acer Aspire 5610 (any running Maple 10); Operating system: Windows XP professional (any running Maple 10); Classification: 3.1, 5; Nature of problem: Simulate the transport of radiation in biological tissues; Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005]; Restrictions: Running time increases rapidly with the number of photons used in the simulation; Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, and once effected the worksheet runs without problem. However, some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
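A minimal sketch of the core Monte-Carlo step this kind of program implements (following the general scheme of the Wang et al. code cited above): photons take exponentially distributed steps, lose weight to absorption, and are terminated by Russian roulette. The optical coefficients, the single-slab geometry, and isotropic scattering are simplifying assumptions for illustration; the real code handles layered tissue and anisotropic scattering.

```python
import math
import random

def absorbed_fraction(n_photons=10000, mu_a=0.1, mu_s=10.0, thickness=1.0):
    """Estimate the fraction of light absorbed in a single slab (coefficients in 1/cm)."""
    mu_t = mu_a + mu_s                      # total interaction coefficient
    albedo = mu_s / mu_t
    absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0            # depth, direction cosine, photon weight
        while w > 0.0:
            step = -math.log(1.0 - random.random()) / mu_t   # exponential step length
            z += uz * step
            if z < 0.0 or z > thickness:    # photon escaped the slab
                break
            absorbed += w * (1.0 - albedo)  # deposit the absorbed part of the weight
            w *= albedo
            uz = 2.0 * random.random() - 1.0                 # isotropic scattering
            if w < 1e-4:                    # Russian roulette termination
                w = w * 10.0 if random.random() < 0.1 else 0.0
    return absorbed / n_photons

print(absorbed_fraction())
```

Running time grows linearly with the number of photons, which is the behaviour noted in the program summary above.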
Solar electric geocentric transfer with attitude constraints: Analysis
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Malchow, H. L.; Delbaum, T. N.
1975-01-01
A time-optimal or nearly time-optimal trajectory program was developed for solar electric geocentric transfer with or without attitude constraints and with an optional initial high thrust stage. The method of averaging reduces computation time. A nonsingular set of orbital elements is used. The constraints, which are those of one of the SERT-C designs, introduce complexities into the analysis, and the solution yields possibly discontinuous changes in thrust direction. The power degradation due to Van Allen radiation is modeled analytically. A wide range of solar cell characteristics is assumed. Effects such as oblateness and shadowing are included. The analysis and the results of many example runs are included.
NASA Astrophysics Data System (ADS)
Kurade, S. S.; Ramteke, A. A.
2018-05-01
In this work, we have investigated the rate of reaction as a function of ionic strength at different temperatures. The main goal of this experiment is to determine the relation of ionic strength to reaction rate and reaction time, and of the rate constant to temperature. It is observed that the addition of a positive salt, which increases the ionic strength, increases the run time at the various temperatures studied. Thus temperature affects the speed of the reaction and the mechanism by which the chemical reaction occurs, and the time variable plays a vital role in the progress of the reaction at different temperatures.
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching. In MARVL we are developing a web-based open source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application; b) access data sets to force a model and nest a model into; c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation; and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
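A hedged sketch of the two-phase idea described above, assuming "backward depth" means the shortest distance from a state to an accepting state over reversed transitions: states are first partitioned by (acceptance, backward depth), then the partition is refined by hashing each state's signature of successor blocks. This is a simplified Moore-style refinement for a complete DFA, not the paper's exact algorithm.

```python
from collections import deque

def minimize(states, alphabet, delta, accepting):
    """delta: dict (state, symbol) -> state for a complete DFA."""
    # Phase 1: backward depth via BFS over reversed transitions from accepting states.
    rev = {s: set() for s in states}
    for (s, a), t in delta.items():
        rev[t].add(s)
    depth = {s: float("inf") for s in states}
    queue = deque()
    for s in accepting:
        depth[s] = 0
        queue.append(s)
    while queue:
        t = queue.popleft()
        for s in rev[t]:
            if depth[s] == float("inf"):
                depth[s] = depth[t] + 1
                queue.append(s)
    block = {s: (s in accepting, depth[s]) for s in states}   # coarse partition

    # Phase 2: refine by hashing each state's signature of successor blocks.
    while True:
        sig = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet)
               for s in states}
        table = {}                      # hash table: signature -> new block id
        new_block = {s: table.setdefault(sig[s], len(table)) for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block            # stable partition: states sharing an id are merged
        block = new_block

# Tiny example: states 1 and 2 accept the same language and are merged.
states = [0, 1, 2, 3]
alphabet = ["a"]
delta = {(0, "a"): 1, (1, "a"): 3, (2, "a"): 3, (3, "a"): 3}
print(minimize(states, alphabet, delta, accepting={3}))
```

Because equivalent states always share the same backward depth, the coarse partition never splits an equivalence class, and the hash-based refinement only has to separate the remaining cases.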
Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-12-01
The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
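As a plain-CPU illustration of the geometric core of ring reconstruction, the sketch below performs an algebraic least-squares circle fit (the Kåsa method) on a set of hit coordinates. This is a generic fitting recipe with made-up numbers, not the specific multi-ring algorithm or GPU implementation used by the NA62 trigger.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for the center (a, b) and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return a, b, radius

# Synthetic hits on a ring of radius 0.19 centered at (0.05, -0.02), with noise.
rng = np.random.default_rng(42)
theta = rng.uniform(0, 2 * np.pi, 20)
hits_x = 0.05 + 0.19 * np.cos(theta) + rng.normal(0, 0.002, 20)
hits_y = -0.02 + 0.19 * np.sin(theta) + rng.normal(0, 0.002, 20)
print(fit_circle(hits_x, hits_y))   # close to (0.05, -0.02, 0.19)
```

Fitting thousands of such hit patterns per second is an embarrassingly parallel workload, which is what makes it a natural fit for GPUs in a low-level trigger.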
Online Meta-data Collection and Monitoring Framework for the STAR Experiment at RHIC
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.; Betts, W.; Van Buren, G.
2012-12-01
The STAR Experiment further exploits scalable message-oriented model principles to achieve a high level of control over online data streams. In this paper we present an AMQP-powered Message Interface and Reliable Architecture framework (MIRA), which allows STAR to orchestrate the activities of Meta-data Collection, Monitoring, Online QA and several Run-Time and Data Acquisition system components in a very efficient manner. The very nature of the reliable message bus suggests parallel usage of multiple independent storage mechanisms for our meta-data. We describe our experience with a robust data-taking setup employing MySQL- and HyperTable-based archivers for meta-data processing. In addition, MIRA has an AJAX-enabled web GUI, which allows real-time visualisation of online process flow and detector subsystem states, and doubles as a sophisticated alarm system when combined with complex event processing engines like Esper, Borealis or Cayuga. The performance data and our planned path forward are based on our experience during the 2011-2012 running of STAR.
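To make the message-oriented pattern concrete, here is a minimal sketch of publishing a meta-data sample to an AMQP broker so that multiple independent archivers can consume it. The pika client, the queue name, and the payload fields are assumptions for illustration and are not MIRA's actual interfaces.

```python
import json
import time

import pika  # generic AMQP 0-9-1 client, assumed available

# Connect to a local broker; in a real deployment this would be the message bus host.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="detector.metadata", durable=True)

# A hypothetical meta-data sample from one detector subsystem.
sample = {"subsystem": "tpc", "quantity": "anode_voltage",
          "value": 1390.0, "timestamp": time.time()}

channel.basic_publish(exchange="",
                      routing_key="detector.metadata",
                      body=json.dumps(sample))
connection.close()
```

Because the broker decouples producers from consumers, several archivers (for example one writing to MySQL and one to HyperTable) can subscribe to the same stream without the data-taking components knowing about either.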
Bichon, E; Guiffard, I; Vénisseau, A; Lesquin, E; Vaccher, V; Brosseaud, A; Marchand, P; Le Bizec, B
2016-08-12
A gas chromatography tandem mass spectrometry method using atmospheric pressure chemical ionisation was developed for the monitoring of 16 brominated flame retardants (7 usually monitored polybromodiphenylethers (PBDEs) plus BDE #209, and 8 additional emerging and novel BFRs) in food and feed of animal origin. The developed analytical method decreases the run time threefold compared to conventional strategies, using a 2.5 m column (5% phenyl stationary phase, 0.1 mm i.d., 0.1 μm f.t.) and a pulsed split injection (1:5) with a helium carrier gas flow rate of 0.48 mL min⁻¹, in one run of 20 min. For most BFRs, analytical data were compared with the current analytical strategy relying on GC/EI/HRMS (double sector, R = 10000 at 10% valley). Performances in terms of sensitivity were found to meet the Commission recommendation (118/2014/EC) for nBFRs. GC/APCI/MS/MS represents a promising alternative for multi-BFR analysis in complex matrices, in that it allows the monitoring of a wider list of contaminants in a single injection and a shorter run time. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenge include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) and the global averaged methods typical of (3) compare for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface water and groundwater modeling.
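Restating the relation given above in symbols (with $t_{\mathrm{run}}$ the wall-clock time of one model run, $N_{\mathrm{runs}}$ the number of runs the analysis needs, and $P$ the parallelization opportunities available), a form consistent with the abstract's wording is:

\[ \text{computational demand} \;=\; \frac{t_{\mathrm{run}} \times N_{\mathrm{runs}}}{P} \]

As an illustrative example with assumed numbers (not from the abstract): a 2-hour model explored with 50 runs on 10 parallel workers still requires about 10 hours of wall-clock time.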
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
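As a hedged illustration of the antithetic-variates idea discussed above (not the UKPDS 68 model itself), the sketch below estimates the mean of a toy outcome with and without antithetic normal draws; the outcome function and all names are assumptions made only for this example.

```cpp
// Minimal sketch of antithetic variates for a Monte Carlo mean estimate.
// The outcome function is a toy stand-in, not the UKPDS 68 equations.
#include <cmath>
#include <iostream>
#include <random>

double outcome(double z) { return std::exp(0.1 * z); }  // toy "QALY-like" response

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> normal(0.0, 1.0);
    const int n = 100000;

    // Plain Monte Carlo: n independent draws.
    double plain = 0.0;
    for (int i = 0; i < n; ++i) plain += outcome(normal(rng));
    plain /= n;

    // Antithetic variates: n/2 pairs (z, -z); the negatively correlated pair
    // members cancel part of the sampling noise, so fewer replications are needed.
    double anti = 0.0;
    for (int i = 0; i < n / 2; ++i) {
        double z = normal(rng);
        anti += 0.5 * (outcome(z) + outcome(-z));
    }
    anti /= (n / 2);

    std::cout << "plain MC estimate:   " << plain << '\n'
              << "antithetic estimate: " << anti  << '\n';
}
```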
NASA Astrophysics Data System (ADS)
McGregor, Stephen J.; Busa, Michael A.; Skufca, Joseph; Yaggie, James A.; Bollt, Erik M.
2009-06-01
Regularity statistics have been previously applied to walking gait measures in the hope of gaining insight into the complexity of gait under different conditions and in different populations. Traditional regularity statistics are subject to the requirement of stationarity, a limitation for examining changes in complexity under dynamic conditions such as exhaustive exercise. Using a novel measure, control entropy (CE), applied to triaxial continuous accelerometry, we report changes in complexity of walking and running during increasing speeds up to exhaustion in highly trained runners. We further apply Karhunen-Loeve analysis in a new and novel way to the patterns of CE responses in each of the three axes to identify dominant modes of CE responses in the vertical, mediolateral, and anterior/posterior planes. The differential CE responses observed between the different axes in this select population provide insight into the constraints of walking and running in those who may have optimized locomotion. Future comparisons between athletes, healthy untrained, and clinical populations using this approach may help elucidate differences between optimized and diseased locomotor control.
Hu, Guoqing; Mizuguchi, Tatsuya; Zhao, Xin; Minamikawa, Takeo; Mizuno, Takahiko; Yang, Yuli; Li, Cui; Bai, Ming; Zheng, Zheng; Yasui, Takeshi
2017-01-01
A single, free-running, dual-wavelength mode-locked, erbium-doped fibre laser was exploited to measure the absolute frequency of continuous-wave terahertz (CW-THz) radiation in real time using dual THz combs of photo-carriers (dual PC-THz combs). Two independent mode-locked laser beams with different wavelengths and different repetition frequencies were generated from this laser and were used to generate dual PC-THz combs having different frequency spacings in photoconductive antennae. Based on the dual PC-THz combs, the absolute frequency of CW-THz radiation was determined with a relative precision of 1.2 × 10−9 and a relative accuracy of 1.4 × 10−9 at a sampling rate of 100 Hz. Real-time determination of the absolute frequency of CW-THz radiation varying over a few tens of GHz was also demonstrated. Use of a single dual-wavelength mode-locked fibre laser, in place of dual mode-locked lasers, greatly reduced the size, complexity, and cost of the measurement system while maintaining the real-time capability and high measurement precision. PMID:28186148
Leisure-time running reduces all-cause and cardiovascular mortality risk.
Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N
2014-08-05
Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk
Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.
2014-01-01
Background Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time and mortality remain uncertain. Objectives We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods Running was assessed on the medical history questionnaire by leisure-time activity. Results During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately, 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions Running, even 5-10 minutes per day and slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581
NSTX-U Advances in Real-Time C++11 on Linux
NASA Astrophysics Data System (ADS)
Erickson, Keith G.
2015-08-01
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one period is a failure) of 200 microseconds.
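The abstract's point about language-level concurrency can be pictured with a minimal C++11 sketch: one thread publishes a measurement through a lock-free std::atomic while a periodic loop consumes it and checks its own deadline. This is only a schematic analogue of the DCPS pattern; the 200 microsecond period is taken from the abstract, while all names and the trivial check are assumptions for illustration.

```cpp
// Sketch of a periodic C++11 control loop using std::atomic and std::thread.
// Not the NSTX-U DCPS code; just the language features the abstract refers to.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<double> latest_sample{0.0};   // lock-free hand-off between threads
std::atomic<bool>   keep_running{true};

void acquisition_thread() {
    double v = 0.0;
    while (keep_running.load(std::memory_order_relaxed)) {
        latest_sample.store(v += 0.001, std::memory_order_release);
        std::this_thread::sleep_for(std::chrono::microseconds(50));
    }
}

int main() {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::microseconds(200);   // 5 kHz, as in the abstract
    std::thread acq(acquisition_thread);

    int missed = 0;
    auto next = clock::now();
    for (int cycle = 0; cycle < 5000; ++cycle) {           // ~1 s of simulated operation
        next += period;
        double x = latest_sample.load(std::memory_order_acquire);
        bool trip = (x > 1000.0);                          // stand-in for the 600 algorithms
        (void)trip;
        if (clock::now() > next) ++missed;                 // deadline check: a miss is a failure
        std::this_thread::sleep_until(next);
    }
    keep_running.store(false);
    acq.join();
    std::cout << "missed deadlines: " << missed << '\n';
}
```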
NASA Astrophysics Data System (ADS)
Metcalfe, Peter; Beven, Keith; Hankin, Barry; Lamb, Rob
2018-04-01
Enhanced hillslope storage is utilised in natural flood management in order to retain overland storm run-off and to reduce connectivity between fast surface flow pathways and the channel. Examples include excavated ponds, deepened or bunded accumulation areas, and gullies and ephemeral channels blocked with wooden barriers or debris dams. The performance of large, distributed networks of such measures is poorly understood. Extensive schemes can potentially retain large quantities of run-off, but there are indications that much of their effectiveness can be attributed to desynchronisation of sub-catchment flood waves. Inappropriately sited measures may therefore increase, rather than mitigate, flood risk. Fully distributed hydrodynamic models have been applied in limited studies but introduce significant computational complexity. The longer run times of such models also restrict their use for uncertainty estimation or evaluation of the many potential configurations and storm sequences that may influence the timings and magnitudes of flood waves. Here a simplified overland flow-routing module and semi-distributed representation of enhanced hillslope storage is developed. It is applied to the headwaters of a large rural catchment in Cumbria, UK, where the use of an extensive network of storage features is proposed as a flood mitigation strategy. The models were run within a Monte Carlo framework against data for a 2-month period of extreme flood events that caused significant damage in areas downstream. Acceptable realisations and likelihood weightings were identified using the GLUE uncertainty estimation framework. Behavioural realisations were rerun against the catchment model modified with the addition of the hillslope storage. Three different drainage rate parameters were applied across the network of hillslope storage. The study demonstrates that schemes comprising widely distributed hillslope storage can be modelled effectively within such a reduced-complexity framework. It shows the importance of drainage rates from storage features while operating through a sequence of events. We discuss limitations in the simplified representation of overland flow routing and of the storage features, and how this could be improved using experimental evidence. We suggest ways in which features could be grouped more strategically and thus improve the performance of such schemes.
Runtime verification of embedded real-time systems.
Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg
We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability, thus facilitating applications of the framework in both the prototyping and the post-deployment phase of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and in the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system under test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
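As a software analogue of the hardware observers described above (the cited framework synthesizes its observers to FPGA logic), a bounded past operator such as "phi held at least once within the last tau ticks" can be monitored with constant memory by remembering the last tick at which phi was true. The class and names below are assumptions for illustration, not the paper's observer algorithms.

```cpp
// Constant-memory observer for the bounded past-time property
// "phi held at least once within the last tau ticks" over a discrete time base.
// A software sketch only; the cited framework implements such observers in hardware.
#include <cstdint>
#include <iostream>

class BoundedOnceObserver {
public:
    explicit BoundedOnceObserver(uint64_t tau) : tau_(tau) {}

    // Feed one sample per tick; returns the verdict at this tick.
    bool step(uint64_t tick, bool phi) {
        if (phi) { seen_ = true; last_true_ = tick; }
        return seen_ && (tick - last_true_ <= tau_);
    }

private:
    uint64_t tau_;
    uint64_t last_true_ = 0;
    bool     seen_ = false;
};

int main() {
    BoundedOnceObserver once_within_3(3);
    bool trace[] = {false, true, false, false, false, false, true};
    for (uint64_t t = 0; t < 7; ++t)
        std::cout << "tick " << t << ": " << once_within_3.step(t, trace[t]) << '\n';
}
```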
Low dose tomographic fluoroscopy: 4D intervention guidance with running prior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Barbara; Kuntz, Jan; Brehm, Marcus
Purpose: Today's standard imaging technique in interventional radiology is single- or biplane x-ray fluoroscopy, which delivers 2D projection images as a function of time (2D+T). This state-of-the-art technology, however, suffers from its projective nature and is limited by the superposition of the patient's anatomy. Temporally resolved tomographic volumes (3D+T) would significantly improve the visualization of complex structures. A continuous tomographic data acquisition, if carried out with today's technology, would yield an excessive patient dose. Recently the authors proposed a method that enables tomographic fluoroscopy at the same dose level as projective fluoroscopy, which means that if the scanning time of an intervention guided by projective fluoroscopy is the same as that of an intervention guided by tomographic fluoroscopy, almost the same dose is administered to the patient. The purpose of this work is to extend the authors' previous work and allow for patient motion during the intervention. Methods: The authors propose the running prior technique for adaptation of a prior image. This adaptation is realized by a combination of registration and projection replacement. In a first step the prior is deformed to the current position via affine and deformable registration. Then the information from outdated projections is replaced by newly acquired projections using forward and backprojection steps. The thus adapted volume is the running prior. The proposed method is validated with simulated as well as measured data. To investigate motion during an intervention, a moving head phantom was simulated. Real in vivo data of a pig were acquired with a prototype CT system consisting of a flat detector and a continuously rotating clinical gantry. Results: With the running prior technique it is possible to correct for motion without additional dose. For an application in intervention guidance both steps of the running prior technique, registration and replacement, are necessary. Reconstructed volumes based on the running prior show high image quality without introducing new artifacts, and the interventional materials are displayed at the correct position. Conclusions: The running prior improves the robustness of low dose 3D+T intervention guidance toward intended or unintended patient motion.
Sustained Accelerated Idioventricular Rhythm in a Centrifuge-Simulated Suborbital Spaceflight.
Suresh, Rahul; Blue, Rebecca S; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M
2017-08-01
Hypergravitational exposures during human centrifugation are known to provoke dysrhythmias, including sinus dysrhythmias/tachycardias, premature atrial/ventricular contractions, and even atrial fibrillation or flutter patterns. However, events are generally short-lived and resolve rapidly after cessation of acceleration. This case report describes a prolonged ectopic ventricular rhythm in response to high G exposure. A previously healthy 30-yr-old man voluntarily participated in centrifuge trials as part of a larger study, experiencing a total of 7 centrifuge runs over 48 h. Day 1 consisted of two +Gz runs (peak +3.5 Gz, run 2) and two +Gx runs (peak +6.0 Gx, run 4). Day 2 consisted of three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz). Hemodynamic data collected included blood pressure, heart rate, and continuous three-lead electrocardiogram. Following the final acceleration exposure of the last Day 2 run (peak +4.5 Gx and +4.0 Gz combined, resultant +6.0 G), during a period of idle resting centrifuge activity (resultant vector +1.4 G), the subject demonstrated a marked change in his three-lead electrocardiogram from normal sinus rhythm to a wide-complex ectopic ventricular rhythm at a rate of 91-95 bpm, consistent with an accelerated idioventricular rhythm (AIVR). This rhythm was sustained for 2 min 24 s before reversion to normal sinus rhythm. The subject reported no adverse symptoms during this time. While prolonged, the dysrhythmia was asymptomatic and self-limited. AIVR is likely a physiological response to acceleration and can be managed conservatively. Vigilance is needed to ensure that AIVR is correctly distinguished from other, malignant rhythms to avoid inappropriate treatment and negative operational impacts. Suresh R, Blue RS, Mathers C, Castleberry TL, Vanderploeg JM. Sustained accelerated idioventricular rhythm in a centrifuge-simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(8):789-793.
Real time animation of space plasma phenomena
NASA Technical Reports Server (NTRS)
Jordan, K. F.; Greenstadt, E. W.
1987-01-01
In pursuit of real time animation of computer simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bowshock which is based on actual spacecraft data and designed for three dimensional graphic output. This output consists of time slice sequences which make up the frames of the animation. With the MPP, 16384, 512 or 4 frames can be calculated simultaneously depending upon which characteristic is being computed. The run time was greatly reduced which promotes the rapid sequence of images and makes real time animation a foreseeable goal. The addition of more complex phenomenology in the constructed computer images is now possible and work proceeds to generate these images.
Whole-body Motion Planning with Simple Dynamics and Full Kinematics
2014-08-01
optimizations can take an excessively long time to run, and may also suffer from local minima; thus, this approach can become intractable for complex robots [...] motions like jumping and climbing. Additionally, the point-mass model suggests that the centroidal angular momentum is zero, which is not valid for motions [...] use in the DARPA Robotics Challenge. A. Jumping: our first example is to command the robot to jump off the ground, as illustrated in Fig. 4.
NASA Astrophysics Data System (ADS)
Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere
2006-02-01
Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, through acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, and driving through production and fulfillment, to evaluating results is currently performed by experienced, highly trained staff. Presented is a solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user can easily run a successful campaign from his or her desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through these technologies, all while providing the benefits of a professionally managed campaign.
Gender difference and age-related changes in performance at the long-distance duathlon.
Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver
2013-02-01
Gender differences and age-related changes in triathlon (i.e., swimming, cycling, and running) performance have been investigated previously, but data are missing for the duathlon (i.e., running, cycling, and running). We investigated the participation and performance trends, the gender difference, and the age-related decline in performance at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men, respectively). Linear regression analyses for the 3 split times and the total event time demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5% for the 10-km run, 150-km cycle, 30-km run, and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time period effect, ranging between 1.3% (first run) and 9.8% (bike split), with 3.3% for overall race time. In accordance with previous observations in triathlons, the age-related decline in duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the careers of long-distance duathletes, with the age of peak performance between 25 and 39 years for both women and men.
A learning approach to the bandwidth multicolouring problem
NASA Astrophysics Data System (ADS)
Akbari Torkestani, Javad
2016-05-01
This article considers the bandwidth multicolouring problem (BMCP), a generalisation of the vertex colouring problem in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is never less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. Here, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. For the first proposed algorithm, we show that by choosing a proper learning rate the algorithm finds the optimal solution with a probability arbitrarily close to unity, and we compute its worst-case time complexity for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, making it scalable to large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show its efficiency in terms of colour set size and running time.
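To make the learning-automaton ingredient concrete, the sketch below shows the standard linear reward-inaction update on an action-probability vector. This is a generic automaton update, not the article's colouring algorithm; whether the environment "rewards" an action would, in that setting, depend on the colour-set size obtained, which is abstracted here as an assumed predicate.

```cpp
// Linear reward-inaction (L_RI) update for a single learning automaton.
// The reward signal is abstracted; in the cited work it would reflect
// the quality (colour-set size) of the colouring produced.
#include <iostream>
#include <random>
#include <vector>

int choose_action(const std::vector<double>& p, std::mt19937& rng) {
    std::discrete_distribution<int> d(p.begin(), p.end());
    return d(rng);
}

void reward_inaction_update(std::vector<double>& p, int chosen, double rate) {
    // On reward: move probability mass toward the chosen action.
    // On penalty (the "inaction" part): the vector is left unchanged by the caller.
    for (std::size_t i = 0; i < p.size(); ++i)
        p[i] = (static_cast<int>(i) == chosen) ? p[i] + rate * (1.0 - p[i])
                                               : p[i] * (1.0 - rate);
}

int main() {
    std::mt19937 rng(1);
    std::vector<double> p(4, 0.25);        // four candidate actions, uniform start
    const double learning_rate = 0.05;     // the rate/optimality trade-off in the abstract

    for (int iter = 0; iter < 200; ++iter) {
        int a = choose_action(p, rng);
        bool rewarded = (a == 2);          // assumed environment: action 2 is "good"
        if (rewarded) reward_inaction_update(p, a, learning_rate);
    }
    for (double pi : p) std::cout << pi << ' ';
    std::cout << '\n';
}
```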
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
SIM_EXPLORE: Software for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.
2013-01-01
Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest- fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to choose cleverly at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to explore efficiently complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
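A minimal way to picture the directed-exploration loop described above: keep the binary success/failure results of past trials, build a crude distance-weighted picture of the success region, and run next the untried candidate whose neighbourhood is most ambiguous. The candidate grid, the uncertainty score and the stand-in simulator below are assumptions; SIM_EXPLORE's actual learners and its LEGION integration are not shown.

```cpp
// Toy directed-exploration loop with binary feedback: at each step, run the
// candidate whose nearby labelled results disagree the most (uncertainty
// sampling). A sketch of the idea only, not the SIM_EXPLORE implementation.
#include <cmath>
#include <iostream>
#include <vector>

struct Trial { double x; bool success; };

// Stand-in for an expensive numerical simulation with a binary outcome.
bool run_simulation(double x) { return x > 0.37 && x < 0.81; }

double uncertainty(double x, const std::vector<Trial>& done) {
    // Distance-weighted vote of previous results; values near 0.5 are ambiguous.
    double w_succ = 0.0, w_all = 0.0;
    for (const Trial& t : done) {
        double w = 1.0 / (1e-6 + std::fabs(x - t.x));
        w_all += w;
        if (t.success) w_succ += w;
    }
    double p = w_succ / w_all;
    return 1.0 - std::fabs(p - 0.5) * 2.0;   // 1 = most uncertain, 0 = settled
}

int main() {
    std::vector<double> candidates;
    for (int i = 0; i <= 100; ++i) candidates.push_back(i / 100.0);

    std::vector<Trial> done = { {0.0, run_simulation(0.0)},
                                {1.0, run_simulation(1.0)},
                                {0.5, run_simulation(0.5)} };          // seed trials
    for (int budget = 0; budget < 12; ++budget) {
        double best_x = candidates.front(), best_u = -1.0;
        for (double x : candidates) {
            double u = uncertainty(x, done);
            if (u > best_u) { best_u = u; best_x = x; }
        }
        done.push_back({best_x, run_simulation(best_x)});              // run chosen trial
        std::cout << "trial at x=" << best_x
                  << " success=" << done.back().success << '\n';
    }
}
```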
Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan
2016-01-01
In an effort to implement fast and effective tank segmentation from infrared images with complex backgrounds, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images with complex backgrounds is proposed, based on the Otsu method with a constraint on the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes (target region, middle background and lower background) by maximizing the sum of their between-class variances. Then, an unsupervised background constraint is applied based on the within-class variance of the target region, so that the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) with complex backgrounds demonstrate that the proposed method offers better segmentation performance and can even be comparable with manual segmentation. In addition, its average running time is only 9.22 ms, indicating that the new method performs well in real-time processing.
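For reference, the classical two-class Otsu threshold that the proposed method builds on can be sketched as below; the article's extension (a three-class split followed by an unsupervised background constraint) is not reproduced here, and the tiny test image is an assumption of the example.

```cpp
// Classical two-class Otsu threshold: pick the grey level that maximises the
// between-class variance over a 256-bin histogram. The cited paper extends this
// idea to three classes and constrains the background first.
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

int otsu_threshold(const std::vector<uint8_t>& pixels) {
    std::array<double, 256> hist{};
    for (uint8_t p : pixels) hist[p] += 1.0;
    const double n = static_cast<double>(pixels.size());

    double total_mean = 0.0;
    for (int i = 0; i < 256; ++i) total_mean += i * hist[i] / n;

    double w0 = 0.0, mu0_sum = 0.0, best_var = -1.0;
    int best_t = 0;
    for (int t = 0; t < 256; ++t) {
        w0 += hist[t] / n;                 // class-0 probability
        mu0_sum += t * hist[t] / n;        // class-0 mean times w0
        double w1 = 1.0 - w0;
        if (w0 <= 0.0 || w1 <= 0.0) continue;
        double mu0 = mu0_sum / w0;
        double mu1 = (total_mean - mu0_sum) / w1;
        double between = w0 * w1 * (mu0 - mu1) * (mu0 - mu1);
        if (between > best_var) { best_var = between; best_t = t; }
    }
    return best_t;
}

int main() {
    std::vector<uint8_t> img = {10, 12, 11, 13, 200, 210, 205, 198, 12, 202};
    std::cout << "Otsu threshold: " << otsu_threshold(img) << '\n';
}
```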
NASA Astrophysics Data System (ADS)
Franzoni, G.; Norkus, A.; Pol, A. A.; Srimanobhas, N.; Walker, J.
2017-10-01
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and the processing of the data collected by the experiment. Since the end of LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever-improving algorithms and calibrations. The scale and complexity of event simulation and processing, and the requirement that multiple campaigns must proceed in parallel, demand that comprehensive, frequently updated and easily accessible monitoring be made available. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp), a web-based service, was developed in 2015 to address those needs. It aggregates information from the multiple services used to define, organize, and run the processing requests. Information is updated hourly using a dedicated elastic database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution describes the pMp development, the evolution of its functionalities, and one and a half years of operational experience.
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable in improving disease gene identification. It is not only due to the fact that disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also due to the fact that gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be further improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
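The two ingredients named in the abstract, a prior probability for each candidate gene and a binary logistic regression over network-derived features, combine in the generic form below; the exact feature vectors (F2, F3) and prior estimation strategies of the paper are not reproduced, so the symbols are placeholders only.

```latex
% Generic form of the scoring described in the abstract:
% a logistic model over a feature vector f(g) for candidate gene g,
% combined with a prior probability \pi(g) via Bayes' rule.
P(\text{assoc} \mid f(g)) = \frac{1}{1 + e^{-(\beta_0 + \boldsymbol{\beta}^{\top} f(g))}},
\qquad
P(g \mid \text{data}) \propto P(\text{data} \mid g)\,\pi(g)
```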
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
AGATE: Adversarial Game Analysis for Tactical Evaluation
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L.
2013-01-01
AGATE generates a set of ranked strategies that enables an autonomous vehicle to track/trail another vehicle that is trying to break the contact using evasive tactics. The software is efficient (can be run on a laptop), scales well with environmental complexity, and is suitable for use onboard an autonomous vehicle. The software will run in near-real-time (2 Hz) on most commercial laptops. Existing software is usually run offline in a planning mode, and is not used to control an unmanned vehicle actively. JPL has developed a system for AGATE that uses adversarial game theory (AGT) methods (in particular, leader-follower and pursuit-evasion) to enable an autonomous vehicle (AV) to maintain tracking/ trailing operations on a target that is employing evasive tactics. The AV trailing, tracking, and reacquisition operations are characterized by imperfect information, and are an example of a non-zero sum game (a positive payoff for the AV is not necessarily an equal loss for the target being tracked and, potentially, additional adversarial boats). Previously, JPL successfully applied the Nash equilibrium method for onboard control of an autonomous ground vehicle (AGV) travelling over hazardous terrain.
Real-time plasma control based on the ISTTOK tomography diagnostic
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.
2008-10-01
The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.
Mean platelet volume (MPV) predicts middle distance running performance.
Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico
2014-01-01
Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully completed a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable whereas age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running performance. The significant association between baseline MPV and running time suggests that hyperactive platelets may exert some pleiotropic effects on endurance performance.
Parallelization of a hydrological model using the message passing interface
Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji
2013-01-01
With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%-70% (or a speedup of 1.74-3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time becomes lower with an increasing number of processes (from two to five), this enhancement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, for manual and automatic calibration procedures, and for optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
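The master/slave task split described above can be pictured with a minimal MPI sketch: every rank, including the master, processes a round-robin slice of independent tasks and the master aggregates the results. The task payload and the fixed split are assumptions standing in for the paper's tuned distribution parameters, and none of the SWAT code is shown.

```cpp
// Minimal MPI master/worker split (compile with mpic++, run with mpirun).
// Each rank processes a slice of independent tasks; rank 0 also aggregates.
// A schematic of the P-SWAT idea, not the SWAT model itself.
#include <mpi.h>
#include <cmath>
#include <cstdio>

double process_task(int task_id) {             // stand-in for one subbasin-like task
    double s = 0.0;
    for (int i = 1; i <= 100000; ++i) s += std::sin(task_id + i * 1e-4);
    return s;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_tasks = 64;                     // e.g. independent sub-units of the model
    double local = 0.0;
    for (int t = rank; t < n_tasks; t += size)  // static round-robin distribution
        local += process_task(t);

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("aggregated result over %d tasks: %f\n", n_tasks, total);

    MPI_Finalize();
    return 0;
}
```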
NASA Astrophysics Data System (ADS)
Skala, Vaclav
2016-06-01
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. For a convex polygon in E2, a simple point-in-polygon test has O(N) complexity, and the optimal algorithm is of O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
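For comparison with the O(1) approach proposed above, the simple O(N) convex point-in-polygon test that the abstract uses as its baseline just checks that the query point lies on the same side of every edge of a counter-clockwise ordered polygon; the small driver and the vertex ordering assumption are illustrative only.

```cpp
// Baseline O(N) point-in-convex-polygon test in E2: the point is inside iff it
// is never strictly to the right of an edge of a counter-clockwise polygon.
// This is the simple test against which the O(1) subdivision method is compared.
#include <iostream>
#include <vector>

struct Pt { double x, y; };

double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

bool inside_convex(const std::vector<Pt>& poly, const Pt& q) {
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Pt& a = poly[i];
        const Pt& b = poly[(i + 1) % n];
        if (cross(a, b, q) < 0.0) return false;   // q strictly right of edge a->b
    }
    return true;
}

int main() {
    std::vector<Pt> square = { {0,0}, {1,0}, {1,1}, {0,1} };   // CCW order
    std::cout << inside_convex(square, {0.5, 0.5}) << ' '      // 1
              << inside_convex(square, {1.5, 0.5}) << '\n';    // 0
}
```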
A Software Architecture for Adaptive Modular Sensing Systems
Lyle, Andrew C.; Naish, Michael D.
2010-01-01
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614
A software architecture for adaptive modular sensing systems.
Lyle, Andrew C; Naish, Michael D
2010-01-01
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.
Running MONET and SALT with Remote Telescope Markup Language 3.0
NASA Astrophysics Data System (ADS)
Hessman, F. V.; Romero, E.
2003-05-01
Complex robotic and service observations in heterogeneous networks of telescopes require a common telescopic lingua franca for the description and transport of observing requests and results. Building upon the experience gained within the Hands-On Universe (HOU) and advanced amateur communities with Remote Telescope Markup Language (RTML) Version 2.1 (http://sunra.lbl.gov/rtml), we have implemented a revised RTML syntax (Version 3.0) which is fully capable of: running the two 1.2m MONET robotic telescopes for a very inhomogeneous clientele from 3 research institutions and high school classes all over the world; connecting MONET to the HOU telescope network; connecting MONET as a trigger to the 11m SALT telescope; and providing all the objects needed to perform and document internet-based user support, ranging all the way from proposal submission and time-allocation to observation reports.
Schütte, Kurt H; Seerden, Stefan; Venter, Rachel; Vanwanseele, Benedicte
2018-01-01
Medial tibial stress syndrome (MTSS) is a common overuse running injury, with pathomechanics likely to be exaggerated by fatigue. Wearable accelerometry provides a novel alternative for assessing biomechanical parameters continuously while running in more ecologically valid settings. The purpose of this study was to determine the influence of outdoor running fatigue and MTSS on both dynamic loading and dynamic stability derived from trunk and tibial accelerometry. Runners with (n=14) and without (n=16) a history of MTSS performed an outdoor fatigue run of 3200 m. Accelerometer-based measures averaged per lap included dynamic loading of the trunk and tibia (i.e. axial peak positive acceleration, signal power magnitude, and shock attenuation) as well as dynamic trunk stability (i.e. tri-axial root mean square ratio, step and stride regularity, and sample entropy). Regression coefficients from generalised estimating equations were used to evaluate group by fatigue interactions. No evidence could be found for dynamic loading being higher with fatigue in runners with MTSS history (all measures p>0.05). One significant group by running fatigue interaction effect was detected for dynamic stability. Specifically, in the MTSS group only, decreased mediolateral sample entropy (i.e. a loss of complexity) was associated with running fatigue (p<0.01). The current results indicate that entire acceleration waveform signals reflecting mediolateral trunk control are related to MTSS history, a compensation that went undetected in the non-fatigued running state. We suggest that a practical outdoor running fatigue protocol that concurrently captures trunk accelerometry-based movement complexity warrants further prospective investigation as an in-situ screening tool for MTSS individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Reduze - Feynman integral reduction in C++
NASA Astrophysics Data System (ADS)
Studerus, C.
2010-07-01
Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summary: Program title: Reduze. Catalogue identifier: AEGE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: yes. No. of lines in distributed program, including test data, etc.: 55 433. No. of bytes in distributed program, including test data, etc.: 554 866. Distribution format: tar.gz. Programming language: C++. Computer: all. Operating system: Unix/Linux. Number of processors used: problem dependent; more than one is possible but not arbitrarily many. RAM: depends on the complexity of the system. Classification: 4.4, 5. External routines: CLN (http://www.ginac.de/CLN/), GiNaC (http://www.ginac.de/). Nature of problem: solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: a Gauss/Laporta algorithm to solve the system of equations. Restrictions: limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: depends on the complexity of the system.
Massoud, Walid; Thanigasalam, Ruban; El Hajj, Albert; Girard, Frederic; Théveniaud, Pierre Etienne; Chatellier, Gilles; Baumert, Hervé
2013-07-01
To evaluate the use of a single needle driver with the V-Loc (Covidien, Dublin, Ireland) running suture and compare this with the use of 2 needle drivers with polyglactin interrupted sutures (IS) in dividing the dorsal venous complex (DVC) and forming the urethrovesical anastomosis (UVA) during robot-assisted radical prostatectomy (RARP). A prospective cohort study was performed to compare V-Loc (n = 40) with polyglactin (n = 40) sutures. Division of the dorsal venous complex and formation of the UVA during robot-assisted radical prostatectomy using V-Loc or polyglactin sutures were studied. Preoperative, intraoperative, and postoperative parameters were measured. V-Loc sutures were associated with a statistically significant reduction in mean dorsal vein suture time (3.15 minutes V-Loc vs 3.75 minutes IS, P = .02) and UVA anastomosis time (8.5 minutes V-Loc vs 11.5 minutes IS, P = .001). No significant difference was noted between operative time (121 minutes V-Loc vs 130 minutes IS, P = .199), delayed healing rates (5% V-Loc vs 7.5% IS, P = .238), continence rate at 12 months (97.5% V-Loc vs 95% IS, P = .368), and urethral stenosis rates (2.5% V-Loc vs 2.5% IS, P = .347) in both groups. The use of a V-Loc running suture with a single needle driver is a feasible, reproducible, and economic technique with no significant difference in continence rates and urethral stenosis rates, compared with the use of a traditional interrupted suture. Copyright © 2013 Elsevier Inc. All rights reserved.
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models.Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
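The baseline being accelerated here is the standard scaled forward recursion, sketched below for a small HMM; the matrices are illustrative only, and none of zipHMM's substring-reuse preprocessing is shown.

```cpp
// Standard forward algorithm with per-step scaling: O(T * K^2) time for a
// sequence of length T and K hidden states. zipHMM speeds this up by reusing
// computations over repeated substrings of the observation sequence.
#include <cmath>
#include <iostream>
#include <vector>

double log_likelihood(const std::vector<std::vector<double>>& A,   // K x K transitions
                      const std::vector<std::vector<double>>& B,   // K x M emissions
                      const std::vector<double>& pi,               // initial distribution
                      const std::vector<int>& obs) {
    const std::size_t K = pi.size();
    std::vector<double> alpha(K), next(K);
    double loglik = 0.0;

    for (std::size_t i = 0; i < K; ++i) alpha[i] = pi[i] * B[i][obs[0]];
    for (std::size_t t = 0; ; ++t) {
        double scale = 0.0;
        for (double a : alpha) scale += a;
        loglik += std::log(scale);
        for (double& a : alpha) a /= scale;          // rescale to avoid underflow
        if (t + 1 == obs.size()) break;
        for (std::size_t j = 0; j < K; ++j) {
            double s = 0.0;
            for (std::size_t i = 0; i < K; ++i) s += alpha[i] * A[i][j];
            next[j] = s * B[j][obs[t + 1]];
        }
        alpha.swap(next);
    }
    return loglik;
}

int main() {
    std::vector<std::vector<double>> A = {{0.9, 0.1}, {0.2, 0.8}};
    std::vector<std::vector<double>> B = {{0.7, 0.3}, {0.1, 0.9}};
    std::vector<double> pi = {0.5, 0.5};
    std::vector<int> obs = {0, 1, 1, 0, 1};
    std::cout << "log-likelihood: " << log_likelihood(A, B, pi, obs) << '\n';
}
```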
Nestly--a framework for running software with nested parameter choices and aggregating results.
McCoy, Connor O; Gallagher, Aaron; Hoffman, Noah G; Matsen, Frederick A
2013-02-01
The execution of a software application or pipeline using various combinations of parameters and inputs is a common task in bioinformatics. In the absence of a specialized tool to organize, streamline and formalize this process, scientists must write frequently complex scripts to perform these tasks. We present nestly, a Python package to facilitate running tools with nested combinations of parameters and inputs. nestly provides three components. First, a module to build nested directory structures corresponding to choices of parameters. Second, the nestrun script to run a given command using each set of parameter choices. Third, the nestagg script to aggregate results of the individual runs into a CSV file, as well as support for more complex aggregation. We also include a module for easily specifying nested dependencies for the SCons build tool, enabling incremental builds. Source, documentation and tutorial examples are available at http://github.com/fhcrc/nestly. nestly can be installed from the Python Package Index via pip; it is open source (MIT license).
Haghighinejad, Hourvash Akbari; Kharazmi, Erfan; Hatam, Nahid; Yousefi, Sedigheh; Hesami, Seyed Ali; Danaei, Mina; Askarian, Mehrdad
2016-01-01
Background: Hospital emergency departments have an essential role in health care systems. In the last decade, developed countries have paid great attention to the overcrowding crisis in emergency departments. Simulation analysis of complex models whose conditions change over time is much more effective than analytical solutions, and the emergency department (ED) is one of the most complex models to analyse. This study aimed to determine the number of waiting patients and the waiting time for emergency department services in an Iranian hospital ED, and to propose scenarios to reduce the queue and waiting time. Methods: This is a cross-sectional study in which simulation software (Arena, version 14) was used. The input information was extracted from the hospital database as well as through sampling. The objective was to evaluate the response variables of waiting time, number waiting and utilization of each server, and to test three scenarios to improve them. Results: Running the models for 30 days revealed that a total of 4088 patients left the ED after being served and 1238 patients were waiting in the queue for admission to the ED bed area at the end of the run (in effect, these patients received services beyond the defined capacity). In the first scenario, the number of beds had to be increased from 81 to 179 for the number waiting at the "bed area" server to become almost zero. The second scenario, which limited hospitalization time in the ED bed area to the third quartile of the serving time distribution, could decrease the number waiting to 586 patients. Conclusion: Doubling the bed capacity in the emergency department, and scaling other resources and capacity accordingly, can solve the problem. This includes bed capacity for both critically ill and less critically ill patients. Classification of ED internal sections based on severity of illness instead of medical specialty is another solution. PMID:26793727
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on the resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to perform the large number of training runs that is usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used. It computes a point which is common to two convex polyhedra. The polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus, the algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
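The second step can be phrased as a linear feasibility problem: two convex polyhedra given in half-space form intersect exactly when the stacked inequality system has a solution. The sketch below uses scipy's LP solver with a zero objective for illustration (the paper uses its own Simplex implementation inside Ropsus); the box examples are invented.

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Return (True, point) if {x: A1 x <= b1} and {x: A2 x <= b2} share a point."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return (True, res.x) if res.success else (False, None)

# Two axis-aligned unit cubes, the second shifted by 0.5 along x: they overlap.
A_box = np.vstack([np.eye(3), -np.eye(3)])
b1 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
b2 = np.array([1.5, 1.0, 1.0, -0.5, 0.0, 0.0])
print(polyhedra_intersect(A_box, b1, A_box, b2))
```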
Space and Time Partitioning with Hardware Support for Space Applications
NASA Astrophysics Data System (ADS)
Pinto, S.; Tavares, A.; Montenegro, S.
2016-08-01
Complex and critical systems like airplanes and spacecraft implement a rapidly growing number of functions. Typically, those systems were implemented with fully federated architectures, but the number and complexity of desired functions in today's systems led the aerospace industry to follow another strategy. Integrated Modular Avionics (IMA) arose as an attractive approach for consolidation, combining several applications into one single generic computing resource. The current approach moves towards the higher integration provided by space and time partitioning (STP) through system virtualization. The problem is that existing virtualization solutions are not ready to fully provide what the future of aerospace is demanding: performance, flexibility, safety and security, while simultaneously containing Size, Weight, Power and Cost (SWaP-C). This work describes a real-time hypervisor for space applications assisted by commercial off-the-shelf (COTS) hardware. ARM TrustZone technology is exploited to implement a secure virtualization solution with low overhead and a low memory footprint. This is demonstrated by running multiple guest partitions of the RODOS operating system on a Xilinx Zynq platform.
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
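A minimal Python analogue of the embarrassingly parallel pattern discussed above (the article's own examples are in MATLAB and R): each replication depends only on its own random seed, so replications can be mapped onto worker processes with no communication. The toy loss model and all numbers are invented for illustration.

```python
import multiprocessing as mp
import random

def one_replication(seed):
    """One independent replication of a toy annual-loss simulation."""
    rng = random.Random(seed)
    n_events = sum(rng.random() < 0.01 for _ in range(1000))        # rare events
    return sum(rng.expovariate(1 / 50_000.0) for _ in range(n_events))

if __name__ == "__main__":
    seeds = range(10_000)                       # independent seeds -> independent runs
    with mp.Pool() as pool:                     # one worker per CPU core by default
        losses = sorted(pool.map(one_replication, seeds, chunksize=100))
    print("mean loss:", sum(losses) / len(losses))
    print("95th percentile:", losses[int(0.95 * len(losses))])
```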
Spike-Timing of Orbitofrontal Neurons Is Synchronized With Breathing.
Kőszeghy, Áron; Lasztóczi, Bálint; Forro, Thomas; Klausberger, Thomas
2018-01-01
The orbitofrontal cortex (OFC) has been implicated in a multiplicity of complex brain functions, including representations of expected outcome properties, post-decision confidence, momentary food-reward values, complex flavors and odors. As breathing rhythm has an influence on odor processing at primary olfactory areas, we tested the hypothesis that it may also influence neuronal activity in the OFC, a prefrontal area involved also in higher order processing of odors. We recorded spike timing of orbitofrontal neurons as well as local field potentials (LFPs) in awake, head-fixed mice, together with the breathing rhythm. We observed that a large majority of orbitofrontal neurons showed robust phase-coupling to breathing during immobility and running. The phase coupling of action potentials to breathing was significantly stronger in orbitofrontal neurons compared to cells in the medial prefrontal cortex. The characteristic synchronization of orbitofrontal neurons with breathing might provide a temporal framework for multi-variable processing of olfactory, gustatory and reward-value relationships.
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via operator interface by pretested complex tasks sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.
Viker, Tomas; Richardson, Matt X
2013-01-01
Research with cyclists suggests a decreased load on the lower limbs by placing the shoe cleat more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run on two occasions, once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as was contact time during km 1 and 5, and heart rate and running speed for km 5 for the two cleat positions. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.
NASA Astrophysics Data System (ADS)
Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.
The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential, but a heavy-tailed power-law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics on macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model considering bacterial dwelling times on the surfaces related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when considering the power-law run time distribution. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liu, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
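The dynamic-updating stage relies on an online SVD; a standard way to add new columns to a truncated SVD without refactoring the whole data set is sketched below (a Brand-style update, shown here as a generic illustration rather than the paper's specific algorithm). Only matrices of size about (r + c) are decomposed, where r is the kept rank and c the number of new samples; the sanity check keeps the full rank so the reconstruction is exact.

```python
import numpy as np

def add_columns(U, s, Vt, C, rank):
    """Update X ~ U diag(s) Vt when new columns C arrive, keeping `rank` components."""
    L = U.T @ C                      # part of C inside the current subspace
    H = C - U @ L                    # part orthogonal to it
    Q, R = np.linalg.qr(H)
    r, c = len(s), C.shape[1]
    K = np.zeros((r + c, r + c))
    K[:r, :r] = np.diag(s)
    K[:r, r:] = L
    K[r:, r:] = R
    Uk, sk, Vkt = np.linalg.svd(K, full_matrices=False)   # small (r+c) x (r+c) SVD
    U_new = np.hstack([U, Q]) @ Uk
    Vt_new = Vkt @ np.block([[Vt, np.zeros((r, c))],
                             [np.zeros((c, Vt.shape[1])), np.eye(c)]])
    return U_new[:, :rank], sk[:rank], Vt_new[:rank]

rng = np.random.default_rng(0)
X, C = rng.normal(size=(100, 60)), rng.normal(size=(100, 20))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U2, s2, Vt2 = add_columns(U, s, Vt, C, rank=80)            # full rank -> exact reconstruction
print(np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([X, C])))
```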
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Li, Tsung-Han; Lin, Hung-Yu; Chen, Kao-Tun; Wu, Chun-Sheng; Lai, Ya-Chieh; Hurat, Philippe
2018-03-01
As process technology advances and integrated circuit (IC) design complexity increases, the failure rate caused by optical effects in semiconductor manufacturing grows higher. In order to enhance chip quality, optical proximity correction (OPC) plays an indispensable role in the manufacturing industry. However, OPC, which includes model creation, correction, simulation and verification, is a bottleneck from design to manufacture due to the multiple iterations and the advanced mathematical description of physical behavior. Thus, this paper presents a pattern-based design technology co-optimization (PB-DTCO) flow that cooperates with OPC to find patterns which will negatively affect the yield and to fix them automatically in advance, reducing the run-time of the OPC operation. The PB-DTCO flow can generate plenty of test patterns for model creation and yield gain, classify candidate patterns systematically, and quickly build up banks of matched pattern and optimization-pattern pairs. Those banks can be used for hotspot fixing and layout optimization, and can also be referenced for the next technology node. Therefore, the combination of the PB-DTCO flow with OPC not only helps reduce time-to-market but is also flexible and can easily be adapted to diverse OPC flows.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.
2017-01-01
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
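The bi-fixed-step idea, coarse steps while the dynamics are smooth and finer steps where they become stiff, can be illustrated with a leaky integrate-and-fire neuron. This is only a loose Python interpretation of that idea: the paper applies it to AdEx and Hodgkin-Huxley models inside CPU and hybrid CPU-GPU simulators, and the threshold test, step sizes and parameters below are all assumptions.

```python
def lif_bi_fixed_step(I, t_end, dt_global=1.0, dt_local=0.1,
                      tau=20.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """LIF neuron integrated with two fixed step sizes (times in ms, voltages in mV)."""
    spikes, v, t = [], v_rest, 0.0
    while t < t_end:
        dv = (-(v - v_rest) + I(t)) / tau
        if v + dv * dt_global < v_th - 1.0:            # far from threshold: coarse step
            v += dv * dt_global
            t += dt_global
        else:                                           # near threshold: re-integrate finely
            for _ in range(int(dt_global / dt_local)):
                v += (-(v - v_rest) + I(t)) / tau * dt_local
                t += dt_local
                if v >= v_th:
                    spikes.append(t)
                    v = v_reset
    return spikes

print(lif_bi_fixed_step(lambda t: 20.0, t_end=200.0))   # constant drive -> regular spiking
```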
Lidierth, Malcolm
2005-02-15
This paper describes software that runs in the Spike2 for Windows environment and provides a versatile tool for generating stimuli during data acquisition from the 1401 family of interfaces (CED, UK). A graphical user interface (GUI) is used to provide dynamic control of stimulus timing. Both single stimuli and trains of stimuli can be generated. The pulse generation routines make use of programmable variables within the interface and allow these to be rapidly changed during an experiment. The routines therefore provide the ease-of-use associated with external, stand-alone pulse generators. Complex stimulus protocols can be loaded from an external text file and facilities are included to create these files through the GUI. The software consists of a Spike2 script that runs in the host PC, and accompanying routines written in the 1401 sequencer control code, that run in the 1401 interface. Handshaking between the PC and the interface card are built into the routines and provides for full integration of sampling, analysis and stimulus generation during an experiment. Control of the 1401 digital-to-analogue converters is also provided; this allows control of stimulus amplitude as well as timing and also provides a sample-hold feature that may be used to remove DC offsets and drift from recorded data.
Optimal chemotaxis in intermittent migration of animal cells
NASA Astrophysics Data System (ADS)
Romanczuk, P.; Salbreux, G.
2015-04-01
Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases, and performing persistent migration during run phases. We show that the chaser average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
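A small Monte Carlo sketch of the strategy described above: the chaser alternates tumbles, during which it reorients towards the target with some angular noise, and runs of exponentially distributed duration, and the mean catching time is estimated for several average run times. The paper's results are analytical; this numerical sketch, with invented parameter values, only illustrates how the catching time depends on the run time.

```python
import numpy as np

def mean_catch_time(run_time, tumble_time=0.2, speed=1.0, noise=0.5,
                    target=(50.0, 0.0), n_cells=200, dt=0.05, t_max=2000.0):
    """Average time for run-and-tumble chasers (all parameters assumed) to reach a target."""
    rng = np.random.default_rng(1)
    target = np.asarray(target)
    times = []
    for _ in range(n_cells):
        pos, t = np.zeros(2), 0.0
        while t < t_max and np.linalg.norm(pos - target) > 1.0:
            # Tumble: point towards the target, up to sensing noise.
            dx, dy = target - pos
            theta = np.arctan2(dy, dx) + rng.normal(0.0, noise)
            heading = np.array([np.cos(theta), np.sin(theta)])
            t += tumble_time
            # Run: persistent motion for an exponentially distributed duration.
            for _ in range(int(rng.exponential(run_time) / dt)):
                pos = pos + speed * heading * dt
                t += dt
                if np.linalg.norm(pos - target) <= 1.0:
                    break
        times.append(t)
    return np.mean(times)

for tr in (0.5, 2.0, 8.0, 32.0):
    print(tr, round(mean_catch_time(tr), 1))
```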
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data has been collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. This thesis also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime in which collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. Sensitivity analysis was done to understand and eliminate noise sources of the instrument. The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. Static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype are described in the last part of this thesis. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed. Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades that will be installed on a time scale of a few months. The second science run will start in spring 2016 and last for about six months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
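The first (coarse-partition) phase can be pictured as a breadth-first search over reversed transitions: here the backward depth of a state is taken to be the length of its shortest path to an accepting state, and states are grouped by acceptance and depth before any refinement. This Python sketch is an interpretation for illustration, not the paper's implementation, and the tiny automaton is invented.

```python
from collections import defaultdict, deque

def backward_depth_partition(transitions, accepting, n_states):
    """Coarse partition of DFA states by (acceptance, backward depth)."""
    reverse = defaultdict(list)
    for (src, _sym), dst in transitions.items():
        reverse[dst].append(src)

    depth = {q: 0 for q in accepting}
    queue = deque(accepting)
    while queue:                                   # BFS over reversed edges
        q = queue.popleft()
        for p in reverse[q]:
            if p not in depth:
                depth[p] = depth[q] + 1
                queue.append(p)

    blocks = defaultdict(set)
    for q in range(n_states):
        key = (q in accepting, depth.get(q, -1))   # -1: states that never reach acceptance
        blocks[key].add(q)
    return list(blocks.values())                   # a hash-table pass would refine these blocks

# 0 -a-> 1 -a-> 2 (accepting, self-loop), plus a dead state 3.
t = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 2, (3, "a"): 3}
print(backward_depth_partition(t, accepting={2}, n_states=4))
```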
NASA Astrophysics Data System (ADS)
Fawzy, Wafaa M.
2010-10-01
A FORTRAN code is developed for simulating and fitting the fine structure of a planar weakly-bonded open-shell complex that consists of a diatomic radical in a ³Σ electronic state and a diatomic or a polyatomic closed-shell molecule. The program sets up the proper total Hamiltonian matrix for a given J value and takes account of electron-spin-electron-spin and electron-spin-rotation interactions, and the quartic and sextic centrifugal distortion terms within the complex. Also, the R-dependence of the electron-spin-electron-spin and electron-spin-rotation couplings is considered. The code does not take account of effects of large-amplitude internal rotation of the diatomic radical within the complex. It is assumed that the complex has a well-defined equilibrium geometry so that effects of large-amplitude motion are negligible. Therefore, the computer code is suitable for a near-rigid rotor. Numerical diagonalization of the matrix provides the eigenvalues and the eigenfunctions that are necessary for calculating energy levels, frequencies, relative intensities of infrared or microwave transitions, and expectation values of the quantum numbers within the complex. Goodness of all the quantum numbers, with the exception of J and parity, depends on the relative sizes of the product of the rotational constants and quantum numbers (i.e. BJ, CJ, and AK), the electron-spin-electron-spin and electron-spin-rotation couplings, as well as the geometry of the complex. Therefore, expectation values of the quantum numbers are calculated in the eigenfunction basis of the complex. The computational time for the least-squares fits has been significantly reduced by using the Hellmann-Feynman theorem for calculating the derivatives. The computer code is useful for analysis of high-resolution infrared and microwave spectra of a planar near-rigid weakly-bonded open-shell complex that contains a diatomic fragment in a ³Σ electronic state and a closed-shell molecule. The computer program was successfully applied to analysis and fitting of the observed high-resolution infrared spectra of the O2–HF/O2–DF and O2–N2O complexes. A test input file for simulation and fitting of the high-resolution infrared spectrum of the O2–DF complex is provided. Program summary: Program title: TSIG_COMP. Catalogue identifier: AEGM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGM_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 030. No. of bytes in distributed program, including test data, etc.: 51 663. Distribution format: tar.gz. Programming language: Fortran 90, free format. Computer: SGI Origin 3400, workstations and PCs. Operating system: Linux, UNIX and Windows (see Restrictions below). RAM: Case dependent. Classification: 16.2. Nature of problem: TSIG_COMP calculates frequencies, relative intensities, and expectation values of the various quantum numbers and parities of bound states involved in allowed ro-vibrational transitions in semi-rigid planar weakly-bonded open-shell complexes. The complexes of interest contain a free radical in a ³Σ state and a closed-shell partner, where the electron-spin-electron-spin interaction, electron-spin-rotation interaction, and centrifugal forces significantly modify the spectral patterns.
To date, ab initio methods are incapable of taking these effects into account to provide accurate predictions for the ro-vibrational energy levels of the complexes of interest. In the TSIG_COMP program, the problem is solved by using the proper effective Hamiltonian and molecular basis set. Solution method: The program uses a Hamiltonian operator that takes into account vibration, end-over-end rotation, electron-spin-electron-spin and electron-spin-rotation interactions, as well as the various centrifugal distortion terms. The Hamiltonian operator and the molecular basis set are used to set up the Hamiltonian matrix in the inertial axis system of the complex of interest. Diagonalization of the Hamiltonian matrix provides the eigenvalues and the eigenfunctions for the bound ro-vibrational states. These eigenvalues and eigenfunctions are used to calculate frequencies and relative intensities of the allowed infrared or microwave transitions, as well as expectation values of all the quantum numbers and parities of the states involved in the transitions. The program employs the method of least-squares fitting to fit the observed frequencies to the calculated frequencies, providing the molecular parameters that determine the geometry of the complex of interest. Restrictions: The number of transitions and parameters included in the fits is limited to 80 parameters and 200 transitions. However, these numbers can be increased by adjusting the dimensions of the arrays (not recommended). Running the program under MS Windows is recommended for simulations of any number of transitions and for fitting a relatively small number of parameters and transitions (maximum 15 parameters and 82 transitions); when fitting a larger number of parameters a run-time error may occur. Because spectra of weakly bonded complexes are recorded at low temperatures, in most cases fittings can be performed under MS Windows. Running time: Problem-dependent. The provided test input for Linux fits 82 transitions and 21 parameters; the actual run time is 62 minutes. The provided test input file for MS Windows fits 82 transitions and 15 parameters; the actual run time is 5 minutes.
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
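A compact Python sketch of the recursive scheme: estimate one constant shift for the current segment, split it at the median retention time, and recurse until a segment is small enough that a single shift suffices. Matching is reduced here to nearest retention times within a tolerance, whereas the real algorithm matches ion features (m/z as well as time); the thresholds and the synthetic drift are assumptions.

```python
import numpy as np

def best_shift(sample, reference, max_shift=5.0, step=0.05, tol=0.3):
    """Constant shift (min) that pairs the most sample features with reference features."""
    shifts = np.arange(-max_shift, max_shift + step, step)
    matches = [np.sum(np.min(np.abs((sample + s)[:, None] - reference[None, :]), axis=1) < tol)
               for s in shifts]
    return shifts[int(np.argmax(matches))]

def align(sample, reference, min_features=8):
    """Divide-and-conquer alignment of feature retention times against a reference run."""
    if len(sample) == 0:
        return sample
    shifted = sample + best_shift(sample, reference)
    if len(sample) <= min_features:               # narrow segment: one shift is enough
        return shifted
    mid = np.median(shifted)                      # split the chromatogram and recurse
    return np.concatenate([align(shifted[shifted <= mid], reference),
                           align(shifted[shifted > mid], reference)])

reference = np.sort(np.random.default_rng(0).uniform(0, 60, 100))
sample = reference + 0.8 + 0.02 * reference       # drift growing to ~2 min across the run
print(np.abs(align(sample, reference) - reference).max())   # residual error after alignment
```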
Brouwers, Bram; Stephens, Natalie A.; Costford, Sheila R.; Hopf, Meghan E.; Ayala, Julio E.; Yi, Fanchao; Xie, Hui; Li, Jian-Liang; Gardell, Stephen J.; Sparks, Lauren M.; Smith, Steven R.
2018-01-01
Mice overexpressing NAMPT in skeletal muscle (NamptTg mice) develop higher exercise endurance and maximal aerobic capacity (VO2max) following voluntary exercise training compared to wild-type (WT) mice. Here, we aimed to investigate the underlying mechanisms by determining skeletal muscle mitochondrial respiratory capacity in NamptTg and WT mice. Body weight and body composition, tissue weight (gastrocnemius, quadriceps, soleus, heart, liver, and epididymal white adipose tissue), skeletal muscle and liver glycogen content, VO2max, skeletal muscle mitochondrial respiratory capacity (measured by high-resolution respirometry), skeletal muscle gene expression (measured by microarray and qPCR), and skeletal muscle protein content (measured by Western blot) were determined following 6 weeks of voluntary exercise training (access to a running wheel) in 13-week-old male NamptTg (exercised NamptTg) mice and WT (exercised WT) mice. Daily running distance and running time during the voluntary exercise training protocol were recorded. Daily running distance (p = 0.51) and running time (p = 0.85) were not significantly different between exercised NamptTg mice and exercised WT mice. VO2max was higher in exercised NamptTg mice compared to exercised WT mice (p = 0.02). Body weight (p = 0.92), fat mass (p = 0.49), lean mass (p = 0.91), tissue weight (all p > 0.05), and skeletal muscle (p = 0.72) and liver (p = 0.94) glycogen content were not significantly different between exercised NamptTg mice and exercised WT mice. Complex I oxidative phosphorylation (OXPHOS) respiratory capacity supported by fatty acid substrates (p < 0.01), maximal (complex I+II) OXPHOS respiratory capacity supported by glycolytic (p = 0.02) and fatty acid (p < 0.01) substrates, and maximal uncoupled respiratory capacity supported by fatty acid substrates (p < 0.01) were higher in exercised NamptTg mice compared to exercised WT mice. Transcriptomic analyses revealed differential expression of genes involved in oxidative metabolism in exercised NamptTg mice compared to exercised WT mice, specifically, enrichment for the gene set related to the SIRT3-mediated signaling pathway. SIRT3 protein content correlated with NAMPT protein content (r = 0.61, p = 0.04). In conclusion, NamptTg mice develop higher exercise capacity following voluntary exercise training compared to WT mice, which is paralleled by higher mitochondrial respiratory capacity in skeletal muscle. The changes in SIRT3 targets suggest that these effects are due to remodeling of mitochondrial function. PMID:29942262
Simplified programming and control of automated radiosynthesizers through unit operations.
Claggett, Shane B; Quinn, Kevin M; Lazari, Mark; Moore, Melissa D; van Dam, R Michael
2013-07-15
Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client-server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client-server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client-server architecture provided robustness and flexibility.
Simplified programming and control of automated radiosynthesizers through unit operations
2013-01-01
Background Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Methods Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client–server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. Results The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client–server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. Conclusions We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client–server architecture provided robustness and flexibility. PMID:23855995
NSTX-U Advances in Real-Time C++11 on Linux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Keith G.
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.
NSTX-U Advances in Real-Time C++11 on Linux
Erickson, Keith G.
2015-08-14
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.
Effect of time span and task load on pilot mental workload
NASA Technical Reports Server (NTRS)
Berg, S. L.; Sheridan, T. B.
1985-01-01
Two sets of experiments were run to examine how the mental workload of a pilot might be measured. The effects of continuous manual control activity versus discrete assigned mental tasks (including the length of time between receiving an assignment and executing it) were examined. The first experiment evaluated the strengths and weaknesses of measuring mental workload with an objective performance measure (altitude deviations) and five subjective ratings (activity level, complexity, difficulty, stress, and workload). The second set of experiments built upon the first set by increasing workload intensities and adding another performance measure: airspeed deviation. The results are discussed for both low- and high-experience pilots.
Williams, Paul T
2012-01-01
Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10⁻¹⁵) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m² per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m² per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
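The two ways of computing the running dose can be compared directly. The values below are invented for a hypothetical runner, not taken from the study, but they reproduce the roughly 40% gap between the time-and-intensity and distance-based MET-hour estimates reported above.

```python
# Hypothetical self-report: 5.2 h of running per week at 11 METs, and 40 km per week.
hours_per_week = 5.2
mets_reported = 11.0
km_per_week = 40.0

methr_day_time_intensity = mets_reported * hours_per_week / 7.0   # time x intensity
methr_day_distance = 1.02 * km_per_week / 7.0                     # 1.02 MET-hours per km run

print(round(methr_day_time_intensity, 2))   # ~8.17 MET-hours/day
print(round(methr_day_distance, 2))         # ~5.83 MET-hours/day
print(f"{methr_day_time_intensity / methr_day_distance - 1:.0%} higher from time x intensity")
```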
Zanatta, Lucia; Valori, Laura; Cappelletto, Eleonora; Pozzebon, Maria Elena; Pavan, Elisabetta; Dei Tos, Angelo Paolo; Merkle, Dennis
2015-02-01
In the modern molecular diagnostic laboratory, cost considerations are of paramount importance. Automation of complex molecular assays not only allows a laboratory to accommodate higher test volumes and throughput but also has a considerable impact on the cost of testing from the perspective of reagent costs, as well as hands-on time for skilled laboratory personnel. The following study tracked the cost of labor (hands-on time) and reagents for fluorescence in situ hybridization (FISH) testing in a routine, high-volume pathology and cytogenetics laboratory in Treviso, Italy, over a 2-y period (2011-2013). The laboratory automated FISH testing with the VP 2000 Processor, a deparaffinization, pretreatment, and special staining instrument produced by Abbott Molecular, and compared hands-on time and reagent costs to manual FISH testing. The results indicated significant cost and time saving when automating FISH with VP 2000 when more than six FISH tests were run per week. At 12 FISH assays per week, an approximate total cost reduction of 55% was observed. When running 46 FISH specimens per week, the cost saving increased to 89% versus manual testing. The results demonstrate that the VP 2000 processor can significantly reduce the cost of FISH testing in diagnostic laboratories. © 2014 Society for Laboratory Automation and Screening.
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
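A minimal NumPy sketch of the autoregressive approach in its usual Fourier-domain form: each spatial-frequency mode of the phase is advected by the wind (frozen flow) and partially refreshed with new noise every step, with the split controlled by the magnitude of the AR coefficient. This is a generic illustration rather than the authors' code; the Kolmogorov scaling, grid size, wind speed and absolute normalisation are assumptions.

```python
import numpy as np

def evolve_phase(n=128, d=0.1, r0=0.15, wind=(8.0, 0.0), dt=0.002,
                 alpha_mag=0.999, steps=100, seed=0):
    """Evolve an atmospheric phase screen with a per-Fourier-mode AR(1) model."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d)
    kx, ky = np.meshgrid(fx, fx)
    k = np.hypot(kx, ky)
    k[0, 0] = fx[1]                                    # avoid dividing by zero at the piston mode
    amp = np.sqrt(0.023 * r0 ** (-5 / 3) * k ** (-11 / 3)) / (n * d)   # Kolmogorov-like weights

    noise = lambda: (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    modes = amp * noise()
    # Frozen flow appears as a per-mode phasor; 1 - |alpha| sets the stochastic "boiling" share.
    alpha = alpha_mag * np.exp(2j * np.pi * (kx * wind[0] + ky * wind[1]) * dt)
    screens = []
    for _ in range(steps):
        modes = alpha * modes + np.sqrt(1.0 - alpha_mag ** 2) * amp * noise()
        screens.append(np.real(np.fft.ifft2(modes)) * n * n)
    return np.array(screens)      # (steps, n, n) phase screens, up to an overall scale factor

print(evolve_phase().shape)
```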
Al Haddad, Hani; Méndez-Villanueva, Alberto; Torreño, Nacho; Munguía-Izquierdo, Diego; Suárez-Arrones, Luis
2017-09-22
The aim of this study was to assess the match-to-match variability obtained using GPS devices, collected during official games in professional soccer players. GPS-derived data from nineteen elite soccer players were collected over two consecutive seasons. Time-motion data for players with more than five full matches were analyzed (n = 202). Total distance covered (TD), TD >13-18 km/h, TD >18-21 km/h, TD >21 km/h, and the number of accelerations >2.5-4 m·s⁻² and >4 m·s⁻² were calculated. The match-to-match variation in running activity was assessed by the typical error expressed as a coefficient of variation (CV, %), and the magnitude of the CV was calculated (effect size). When all players were pooled together, CVs ranged from 5% to 77% (first half) and from 5% to 90% (second half) for TD and the number of accelerations >4 m·s⁻², respectively, and the magnitudes of the CVs were rated from small to moderate (effect size = 0.57-0.98). The CVs were likely to increase with running/acceleration intensity, and were likely to differ between playing positions (e.g., TD >13-18 km/h: 3.4% for second strikers vs 14.2% for strikers, and 14.9% for wide-defenders vs 9.7% for wide-midfielders). Present findings indicate that variability in players' running performance is high for some variables and likely position-dependent. Such variability should be taken into account when using these variables to prescribe and/or monitor training intensity/load. GPS-derived match-to-match variability in the locomotor performance of professional soccer players during official games is high for some variables, particularly high-speed running, owing to the complexity of match running performance, its most influential factors, and the reliability of the devices.
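For concreteness, the typical error expressed as a CV can be computed per player from consecutive match-to-match differences, as sketched below with invented toy data (the column names, values and the use of consecutive-match difference scores are assumptions, not the study's data or exact method).

```python
import numpy as np
import pandas as pd

# Toy stand-in for GPS match reports: one row per player per match (values invented).
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "player": np.repeat([f"P{i}" for i in range(5)], 8),
    "TD":     rng.normal(10500, 600, 40),    # total distance (m)
    "TD_21":  rng.normal(250, 90, 40),       # distance above 21 km/h (m)
})

def match_to_match_cv(values):
    """Typical error as a CV (%), from the SD of consecutive-match difference scores."""
    diffs = np.diff(values)
    typical_error = diffs.std(ddof=1) / np.sqrt(2)
    return 100 * typical_error / values.mean()

cv = df.groupby("player")[["TD", "TD_21"]].agg(match_to_match_cv)
print(cv.round(1))    # the high-speed variable shows a much larger CV than total distance
```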
Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.
Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M
2015-05-01
Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Randomized crossover trial. Level 3. Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
NASA Astrophysics Data System (ADS)
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile-time that all timing requirements will be satisfied at run-time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods like deadline-monotonic scheduling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.; Yoshimura, A.; Butler, D.
1996-11-01
This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology. This report is published as two documents: Model Overview and Domain Analysis. A separate Kaiser-proprietary report contains the Disease and Health Care Organization Selection Models.
16 CFR 803.10 - Running of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
16 CFR § 803.10 Running of time. Title 16, Commercial Practices; Federal Trade Commission Rules, Regulations, Statements and Interpretations under the Hart-Scott-Rodino Antitrust Improvements Act of 1976; Transmittal Rules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, or interesting events or times, so documents can be sequenced on a timeline. Each of these steps can be written as a specialized program that works in isolation from the other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing require suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
Altered Running Economy Directly Translates to Altered Distance-Running Performance.
Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger
2016-11-01
Our goal was to quantify if small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing, and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m·s(-1) by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.
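The two regression figures reported above (about 1.11% higher metabolic rate and about 0.78% slower 3000-m time per added 100 g per shoe) can be applied directly; the short Python sketch below simply restates them as arithmetic, using the study's mean control-shoe time as the baseline.

```python
# Restating the reported linear relationships as arithmetic (values from the abstract).
metabolic_increase_per_100g = 1.11      # percent per 100 g added per shoe
slowdown_per_100g = 0.78                # percent slower 3000-m time per 100 g per shoe
baseline_3000m_s = 626.1                # mean control-shoe 3000-m time reported above

for added_mass_g in (100, 300):
    slowdown_pct = slowdown_per_100g * added_mass_g / 100
    predicted_time = baseline_3000m_s * (1 + slowdown_pct / 100)
    print(f"+{added_mass_g} g/shoe: ~{slowdown_pct:.2f}% slower, "
          f"predicted 3000-m time ~{predicted_time:.1f} s")
```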
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, D.; Yoshimura, A.; Butler, D.
This report describes the results of a Cooperative Research and Development Agreement between Sandia National Laboratories and Kaiser Permanente Southern California to develop a prototype computer model of Kaiser Permanente's health care delivery system. As a discrete event simulation, SimHCO models for each of 100,000 patients the progression of disease, individual resource usage, and patient choices in a competitive environment. SimHCO is implemented in the object-oriented programming language C++, stressing reusable knowledge and reusable software components. The versioned implementation of SimHCO showed that the object-oriented framework allows the program to grow in complexity in an incremental way. Furthermore, timing calculations showed that SimHCO runs in a reasonable time on typical workstations, and that a second phase model will scale proportionally and run within the system constraints of contemporary computer technology.
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
Wojtusik, Mateusz; Zurita, Mauricio; Villar, Juan C; Ladero, Miguel; Garcia-Ochoa, Felix
2016-09-01
The effect of fluid dynamic conditions on enzymatic hydrolysis of acid pretreated corn stover (PCS) has been assessed. Runs were performed in stirred tanks at several stirrer speed values, under typical conditions of temperature (50°C), pH (4.8) and solid charge (20% w/w). A complex mixture of cellulases, xylanases and mannanases was employed for PCS saccharification. At low stirring speeds (<150 rpm), comparison of the estimated mass transfer coefficients and rates with the chemical hydrolysis rates clearly shows low mass transfer rates, with this phenomenon being the controlling step of the overall process rate. However, for stirrer speeds of 300 rpm and above, the overall process rate is controlled by the hydrolysis reactions. The ratio between mass transfer and overall chemical reaction rates changes with time depending on the conditions of each run. Copyright © 2016 Elsevier Ltd. All rights reserved.
Numerical simulation of the pollution formed by exhaust jets at the ground running procedure
NASA Astrophysics Data System (ADS)
Korotaeva, T. A.; Turchinovich, A. O.
2016-10-01
The paper presents an approach that is new for aviation-related ecology. The approach allows the spatial distribution of pollutant concentrations released during the engine ground running procedure (GRP) to be defined using full gas-dynamic models. For the first time, such a task is modeled in a three-dimensional approximation in the framework of the numerical solution of the Navier-Stokes equations, taking into account a kinetic model of the interaction between the components of engine exhaust and air. The complex pattern of gas-dynamic flow that occurs around an aircraft, with the exhaust jets interacting with each other, the air, the jet blast deflector (JBD), and the surface of the airplane, is studied in the present work. The numerical technique developed for calculating the concentrations of pollutants produced at the GRP stage permits defining the level, character, and area of contamination more reliably and increases the accuracy with which sanitary protection zones are defined.
Running SW4 On New Commodity Technology Systems (CTS-1) Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, Arthur J.; Petersson, N. Anders; Pitarka, Arben
We have recently been running earthquake ground motion simulations with SW4 on the new capacity computing systems, called the Commodity Technology Systems - 1 (CTS-1) at Lawrence Livermore National Laboratory (LLNL). SW4 is a fourth order time domain finite difference code developed by LLNL and distributed by the Computational Infrastructure for Geodynamics (CIG). SW4 simulates seismic wave propagation in complex three-dimensional Earth models including anelasticity and surface topography. We are modeling near-fault earthquake strong ground motions for the purposes of evaluating the response of engineered structures, such as nuclear power plants and other critical infrastructure. Engineering analysis of structures requires the inclusion of high frequencies which can cause damage, but are often difficult to include in simulations because of the need for large memory to model fine grid spacing on large domains.
NASA Astrophysics Data System (ADS)
Goma, Sergio R.
2015-03-01
Mobile technologies are now ubiquitous and the complexity of problems is continuously increasing. In the context of the advancement of engineering, this paper explores possible reasons that could cause a saturation in technology evolution - namely the ability to solve problems based on previous results and the ability to express solutions more efficiently - concluding that 'thinking outside of the brain', as in solving engineering problems that are expressed in a virtual medium due to their complexity, would benefit from mobile technology augmentation. This could be the necessary evolutionary step that provides the efficiency required to solve new complex problems (addressing the 'running out of time' issue) and removes the barrier to communicating results (addressing the human 'perception/expression imbalance' issue). Some consequences are discussed; in this context, artificial intelligence becomes an automation aid rather than a necessary next evolutionary step. The paper concludes that research in modeling as a problem-solving aid and in data visualization as a perception aid, augmented with mobile technologies, could be the path to an evolutionary step in advancing engineering.
Geomagnetic effects caused by rocket exhaust jets
NASA Astrophysics Data System (ADS)
Lipko, Yuriy; Pashinin, Aleksandr; Khakhinov, Vitaliy; Rahmatulin, Ravil
2016-09-01
In the space experiment Radar-Progress, we have made 33 series of measurements of geomagnetic variations during ignitions of engines of Progress cargo spacecraft in low Earth orbit. We used magneto-measuring complexes, installed at observatories of the Institute of Solar-Terrestrial Physics of Siberian Branch of the Russian Academy of Sciences, and magnetotelluric equipment of a mobile complex. We assumed that engine running can cause geomagnetic disturbances in flux tubes crossed by the spacecraft. When analyzing experimental data, we took into account space weather factors: solar wind parameters, total daily mid-latitude geomagnetic activity index Kp, geomagnetic auroral electrojet index AE, global geomagnetic activity. The empirical data we obtained indicate that 18 of the 33 series showed geomagnetic variations in various time ranges.
Complexity transitions in global algorithms for sparse linear systems over finite fields
NASA Astrophysics Data System (ADS)
Braunstein, A.; Leone, M.; Ricci-Tersenghi, F.; Zecchina, R.
2002-09-01
We study the computational complexity of a very basic problem, namely that of finding solutions to a very large set of random linear equations in a finite Galois field modulo q. Using tools from statistical mechanics we are able to identify phase transitions in the structure of the solution space and to connect them to the changes in the performance of a global algorithm, namely Gaussian elimination. Crossing phase boundaries produces a dramatic increase in memory and CPU requirements necessary for the algorithms. In turn, this causes the saturation of the upper bounds for the running time. We illustrate the results on the specific problem of integer factorization, which is of central interest for deciphering messages encrypted with the RSA cryptosystem.
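As a concrete reference point for the global algorithm discussed above, the following minimal Python sketch performs Gaussian elimination on a small linear system over GF(q) with q prime; it illustrates the algorithm only, not the statistical-mechanics analysis or the factorization application.

```python
def gauss_mod_q(A, b, q):
    """Solve A x = b over GF(q), q prime, by Gaussian elimination.

    Returns one solution, or None if the system is inconsistent.
    A is a list of rows; all arithmetic is reduced modulo q.
    """
    n, m = len(A), len(A[0])
    M = [[a % q for a in row] + [bi % q] for row, bi in zip(A, b)]
    pivots, row = [], 0
    for col in range(m):
        # Find a row with a nonzero entry in this column.
        piv = next((r for r in range(row, n) if M[r][col]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], q - 2, q)           # inverse via Fermat's little theorem
        M[row] = [(x * inv) % q for x in M[row]]
        for r in range(n):
            if r != row and M[r][col]:
                factor = M[r][col]
                M[r] = [(x - factor * y) % q for x, y in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
    # Inconsistent if a zero row has a nonzero right-hand side.
    if any(all(x == 0 for x in M[r][:m]) and M[r][m] for r in range(row, n)):
        return None
    x = [0] * m
    for r, col in enumerate(pivots):
        x[col] = M[r][m]
    return x

print(gauss_mod_q([[1, 2], [3, 4]], [5, 6], 7))    # example over GF(7): [3, 1]
```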
Dual-comb spectroscopy of water vapor with a free-running semiconductor disk laser.
Link, S M; Maas, D J H C; Waldburger, D; Keller, U
2017-06-16
Dual-comb spectroscopy offers the potential for high accuracy combined with fast data acquisition. Applications are often limited, however, by the complexity of optical comb systems. Here we present dual-comb spectroscopy of water vapor using a substantially simplified single-laser system. Very good spectroscopy measurements with fast sampling rates are achieved with a free-running dual-comb mode-locked semiconductor disk laser. The absolute stability of the optical comb modes is characterized both for free-running operation and with simple microwave stabilization. This approach drastically reduces the complexity for dual-comb spectroscopy. Band-gap engineering to tune the center wavelength from the ultraviolet to the mid-infrared could optimize frequency combs for specific gas targets, further enabling dual-comb spectroscopy for a wider range of industrial applications. Copyright © 2017, American Association for the Advancement of Science.
Suoranta, Sanna; Holli-Helenius, Kirsi; Koskenkorva, Päivi; Niskanen, Eini; Könönen, Mervi; Äikiä, Marja; Eskola, Hannu; Kälviäinen, Reetta; Vanninen, Ritva
2013-01-01
Progressive myoclonic epilepsy type 1 (EPM1) is an autosomal recessively inherited neurodegenerative disorder characterized by young onset age, myoclonus and tonic-clonic epileptic seizures. At the time of diagnosis, the visual assessment of the brain MRI is usually normal, with no major changes found later. Therefore, we utilized texture analysis (TA) to characterize and classify the underlying properties of the affected brain tissue by means of 3D texture features. Sixteen genetically verified patients with EPM1 and 16 healthy controls were included in the study. TA was performed upon 3D volumes of interest that were placed bilaterally in the thalamus, amygdala, hippocampus, caudate nucleus and putamen. Compared to the healthy controls, EPM1 patients had significant textural differences especially in the thalamus and right putamen. The most significantly differing texture features included parameters that measure the complexity and heterogeneity of the tissue, such as the co-occurrence matrix-based entropy and angular second moment, and also the run-length matrix-based parameters of gray-level non-uniformity, short run emphasis and long run emphasis. This study demonstrates the usability of 3D TA for extracting additional information from MR images. Textural alterations which suggest complex, coarse and heterogeneous appearance were found bilaterally in the thalamus, supporting the previous literature on thalamic pathology in EPM1. The observed putamenal involvement is a novel finding. Our results encourage further studies on the clinical applications, feasibility, reproducibility and reliability of 3D TA. PMID:23922849
Belke, Terry W; Christie-Fougere, Melissa M
2006-11-01
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
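The paper's algebraic encoding is not spelled out in the abstract; as a loose illustration of the first approach (a small run-time component that samples trace information and triggers a fail-safe on deviation from a design-level property), here is a minimal Python sketch. The acquire/release safety property and the toy trace source are hypothetical stand-ins, not the authors' construction.

```python
import random

# Hypothetical design-level safety property: a resource is never released
# before it has been acquired (checked on each sampled execution trace).
def violates_safety(trace):
    held = False
    for event in trace:
        if event == "acquire":
            held = True
        elif event == "release":
            if not held:
                return True
            held = False
    return False

def monitor(get_trace, samples=100):
    """Randomly sample run-time traces and flag deviations from the spec."""
    for _ in range(samples):
        if violates_safety(get_trace()):
            return "fail-safe triggered"       # pre-specified reaction on deviation
    return "no deviation observed"

# Toy stand-in for querying the running implementation for trace information.
def get_trace():
    return random.choice([
        ["acquire", "release"],
        ["acquire", "acquire", "release", "release"],
    ])

print(monitor(get_trace))
```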
Impact of water quality on chlorine demand of corroding copper.
Lytle, Darren A; Liggett, Jennifer
2016-04-01
Copper is widely used in drinking water premise plumbing system materials. In buildings such as hospitals, large and complicated plumbing networks make it difficult to maintain good water quality. Sustaining safe disinfectant residuals throughout a building to protect against waterborne pathogens such as Legionella is particularly challenging since copper and other reactive distribution system materials can exert considerable demands. The objective of this work was to evaluate the impact of pH and orthophosphate on the consumption of free chlorine associated with corroding copper pipes over time. A copper test-loop pilot system was used to control test conditions and systematically meet the study objectives. Chlorine consumption trends attributed to abiotic reactions with copper over time were different for each pH condition tested, and the total amount of chlorine consumed over the test runs increased with increasing pH. Orthophosphate eliminated chlorine consumption trends with elapsed time (i.e., chlorine demand was consistent across entire test runs). Orthophosphate also greatly reduced the total amount of chlorine consumed over the test runs. Interestingly, the total amount of chlorine consumed and the consumption rate were not pH dependent when orthophosphate was present. The findings reflect the complex and competing reactions at the copper pipe wall including corrosion, oxidation of Cu(I) minerals and ions, and possible oxidation of Cu(II) minerals, and the change in chlorine species all as a function of pH. The work has practical applications for maintaining chlorine residuals in premise plumbing drinking water systems including large buildings such as hospitals. Published by Elsevier Ltd.
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert
2016-01-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778
Nieman, David C; Capps, Courtney L; Capps, Christopher R; Shue, Zack L; McBride, Jennifer E
2018-05-03
This double-blind, randomized, placebo-controlled crossover trial determined if ingestion of a supplement containing a tomato complex with lycopene, phytoene, and phytofluene (T-LPP) and other compounds for 4 weeks would attenuate inflammation, muscle damage, and oxidative stress postexercise and during recovery from a 2-hr running bout that included 30 min of -10% downhill running. Study participants ingested the T-LPP supplement or placebo with the evening meal for 4 weeks prior to running 2 hr at high intensity. Blood samples and delayed onset muscle soreness ratings were taken pre- and post-4-week supplementation, and immediately following the 2-hr run, and then 1-hr, 24-hr, and 48-hr postrun. After a 2-week washout period, participants crossed over to the opposite treatment and repeated all procedures. Plasma lycopene, phytoene, and phytofluene increased significantly in T-LPP compared with placebo (p < .001 for each). Significant time effects were shown for serum creatine kinase, delayed onset muscle soreness, C-reactive protein, myoglobin, 9- and 13-hydroxyoctadecadienoic acids, ferric reducing ability of plasma, and six plasma cytokines (p < .001 for each). The pattern of increase for serum myoglobin differed between T-LPP and placebo (interaction effect, p = .016, with lower levels in T-LPP), but not for creatine kinase, delayed onset muscle soreness, C-reactive protein, the six cytokines, 9- and 13-hydroxyoctadecadienoic acids, and ferric reducing ability of plasma. No significant time or interaction effects were measured for plasma-oxidized low-density lipoprotein or serum 8-hydroxy-2'-deoxyguanosine. In summary, supplementation with T-LPP over a 4-week period increased plasma carotenoid levels 73% and attenuated postexercise increases in the muscle damage biomarker myoglobin, but not inflammation and oxidative stress.
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R; Anagnostopoulos, Christoforos; Faisal, Aldo A; Montana, Giovanni; Leech, Robert
2016-04-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
MBE growth of highly reproducible VCSELs
NASA Astrophysics Data System (ADS)
Houng, Y. M.; Tan, M. R. T.
1997-05-01
Advances in the design of heterojunction devices have placed stringent demands on the epitaxial material technologies required to fabricate these structures. The increased demand for more stringent tolerance and complex device structures have resulted in a situation where acceptable growth yields will be realized only if epitaxial growth is directly monitored and controlled in real time. We report the growth of 980- and 850-nm vertical cavity surface emitting lasers (VCSEL's) by gas-source molecular beam epitaxy (GSMBE), in which the pyrometric interferometry technique is used for in situ monitoring and feedback control of layer thickness to obtain the highly reproducible distributed Bragg reflectors (DBR) for VCSEL structures. This technique uses an optical pyrometer to measure emissivity oscillations of the growing epi-layer surface. The growing layer thickness can then be related to the emissivity oscillation signals. When the layer reaches the desired thickness, the growth of the subsequent layer is initiated. By making layer thickness measurements and control in real-time throughout the entire growth cycle of the structure, the Fabry-Perot resonance at the desired wavelength is reproducibly obtained. The run-to-run variation of the Fabry-Perot wavelength of VCSEL structures is < ± 0.4%. Using this technique, the group III fluxes can also be calibrated and corrected for flux drifts, thus we are able to control the gain peak of the active region with a run-to-run variation of less than 0.3%. Surface emitting laser diodes were fabricated and operated CW at room temperature. CW threshold currents of 3 and 5 mA are measured at room temperature for 980- and 850-nm lasers, respectively. Output powers higher than 25 mW for 980-nm and 12 mW for 850-nm devices are obtained.
NASA Technical Reports Server (NTRS)
Zavordsky, Bradley; Case, Jonathan L.; Gotway, John H.; White, Kristopher; Medlin, Jeffrey; Wood, Lance; Radell, Dave
2014-01-01
Local modeling with a customized configuration is conducted at National Weather Service (NWS) Weather Forecast Offices (WFOs) to produce high-resolution numerical forecasts that can better simulate local weather phenomena and complement larger scale global and regional models. The advent of the Environmental Modeling System (EMS), which provides a pre-compiled version of the Weather Research and Forecasting (WRF) model and wrapper Perl scripts, has enabled forecasters to easily configure and execute the WRF model on local workstations. NWS WFOs often use EMS output to help in forecasting highly localized, mesoscale features such as convective initiation, the timing and inland extent of lake effect snow bands, lake and sea breezes, and topographically-modified winds. However, quantitatively evaluating model performance to determine errors and biases still proves to be one of the challenges in running a local model. Developed at the National Center for Atmospheric Research (NCAR), the Model Evaluation Tools (MET) verification software makes performing these types of quantitative analyses easier, but operational forecasters do not generally have time to familiarize themselves with navigating the sometimes complex configurations associated with the MET tools. To assist forecasters in running a subset of MET programs and capabilities, the Short-term Prediction Research and Transition (SPoRT) Center has developed and transitioned a set of dynamic, easily configurable Perl scripts to collaborating NWS WFOs. The objective of these scripts is to provide SPoRT collaborating partners in the NWS with the ability to evaluate the skill of their local EMS model runs in near real time with little prior knowledge of the MET package. The ultimate goal is to make these verification scripts available to the broader NWS community in a future version of the EMS software. This paper provides an overview of the SPoRT MET scripts, instructions for how the scripts are run, and example use cases.
Michalsik, L B; Aagaard, P; Madsen, K
2013-07-01
The purpose of this study was to determine the physical demands and match-induced impairments in physical performance in male elite Team Handball (TH) players in relation to playing position. Male elite TH field players were closely observed during 6 competitive seasons. Each player (wing players: WP, pivots: PV, backcourt players: BP) was evaluated during match-play using video recording and subsequently performing locomotion match analysis. A total distance of 3 627±568 m (group means±SD) was covered per match with a total effective playing time (TPT) of 53:51±5:52 min:s, while full-time players covered 3 945±538 m. The mean speed was 6.40±1.01 km·h(-1). High-intensity running constituted only 1.7±0.9% of TPT per match corresponding to 7.9±4.9% of the total distance covered. An average of 1 482.4±312.6 activity changes per player (n=82) with 53.2±14.1 high-intensity runs were observed per match. Total distance covered was greater in BP (3 765±532 m) and WP (3 641±501 m) than PV (3 295±495 m) (p<0.05), and WP performed more high-intensity running (10.9±5.7% of total distance covered) than PV (8.5±4.3%, p<0.05) and BP (6.2±3.2%, p<0.01). The amount of high-intensity running was lower (p<0.05) in the second (130.4±38.4 m) than in the first half (155.3±47.6 m), corresponding to a decrease of 16.2%. In conclusion, modern male elite TH is a complex team sport that comprises several types of movement categories, which during match-play place moderate-to-high demands on intermittent endurance running capacity and where the amount of high-intensity running may be high during brief periods of the match. Signs of fatigue-related changes were observed in terms of temporarily impaired physical performance, since the amount of high-intensity running was reduced in the second half. Notably, physical demands differed between playing positions, with WP demonstrating a more intensive activity pattern than BP and PV, respectively. © Georg Thieme Verlag KG Stuttgart · New York.
Challenges in Visual Analysis of Ensembles
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.
Challenges in Visual Analysis of Ensembles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skala, Vaclav
There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways for actual speed-up. In the case of a convex polygon in E², a simple Point-in-Polygon test is of O(N) complexity and the optimal algorithm is of O(log N) computational complexity. In the E³ case, the complexity is O(N) even for the convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
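The O(1) space-subdivision algorithm itself is not given in the abstract; for contrast, the sketch below shows the classical O(log N) Point-in-Convex-Polygon test it improves upon, implemented in Python for a counter-clockwise vertex list. The vertex-ordering assumption and the example polygon are illustrative.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means b lies to the left of ray o->a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(poly, p):
    """O(log N) test; poly lists vertices counter-clockwise, poly[0] is the apex."""
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False
    # Binary search for the wedge (poly[0], poly[i], poly[i+1]) containing p.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], p) >= 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_convex_polygon(square, (1, 1)))   # True
print(point_in_convex_polygon(square, (5, 1)))   # False
```

An O(1) scheme trades this logarithmic search for a precomputed subdivision looked up directly at run time, which is the gain the paper describes.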
Multiuser receiver for DS-CDMA signals in multipath channels: an enhanced multisurface method.
Mahendra, Chetan; Puthusserypady, Sadasivan
2006-11-01
This paper deals with the problem of multiuser detection in direct-sequence code-division multiple-access (DS-CDMA) systems in multipath environments. The existing multiuser detectors can be divided into two categories: (1) low-complexity poor-performance linear detectors and (2) high-complexity good-performance nonlinear detectors. In particular, in channels where the orthogonality of the code sequences is destroyed by multipath, detectors with linear complexity perform much worse than the nonlinear detectors. In this paper, we propose an enhanced multisurface method (EMSM) for multiuser detection in multipath channels. EMSM is an intermediate piecewise linear detection scheme with a run-time complexity linear in the number of users. Its bit error rate performance is compared with existing linear detectors, a nonlinear radial basis function detector trained by the new support vector learning algorithm, and Verdu's optimal detector. Simulations in multipath channels, for both synchronous and asynchronous cases, indicate that it always outperforms all other linear detectors, performing nearly as well as nonlinear detectors.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...
2015-07-14
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Effect of Minimalist Footwear on Running Efficiency
Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.
2015-01-01
Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304
RTSPM: real-time Linux control software for scanning probe microscopy.
Chandrasekhar, V; Mehta, M M
2013-01-01
Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
DTC commissioning. An arranged marriage.
Mooney, Helen
2004-04-22
The first privately run diagnostic and treatment centre required nine months of complex talks between the strategic health authority, the trust and Bupa. The contract cost more than if it had been run by the trust, but it increased access and transferred risk. One lesson was that staff should have been better acclimatised to the idea.
Rivers Run Through It: Discovering the Interior Columbia River Basin.
ERIC Educational Resources Information Center
Davis, Shelley; Wojtanik, Brenda Lincoln; Rieben, Elizabeth
1998-01-01
Explores the Columbia River Basin, its ecosystems, and challenges faced by natural resource managers. By studying the basin's complexity, students can learn about common scientific concepts such as the power of water and effects of rain shadows. Students can also explore social-scientific issues such as conflicts between protecting salmon runs and…
NASA Technical Reports Server (NTRS)
Benavente, Javier E.; Luce, Norris R.
1989-01-01
Demands for nonlinear time history simulations of large, flexible multibody dynamic systems have created a need for efficient interfaces between finite-element modeling programs and time-history simulations. One such interface, TREEFLX, an interface between NASTRAN and TREETOPS, a nonlinear dynamics and controls time history simulation for multibody structures, is presented and demonstrated via example using the proposed Space Station Mobile Remote Manipulator System (MRMS). The ability to run all three programs (NASTRAN, TREEFLX and TREETOPS), in addition to other programs used for controller design and model reduction (such as DMATLAB and TREESEL, both described), under a UNIX Workstation environment demonstrates the flexibility engineers now have in designing, developing and testing control systems for dynamically complex systems.
5K Run: 7-Week Training Schedule for Beginners
... This 5K training schedule incorporates a mix of running, walking and resting. This combination helps reduce the ... you'll gradually increase the amount of time running and reduce the amount of time walking. If ...
Using Modules with MPICH-G2 (and "Loose Ends")
NASA Technical Reports Server (NTRS)
Chang, Johnny; Thigpen, William W. (Technical Monitor)
2002-01-01
A new approach to running complex, distributed MPI jobs using the MPICH-G2 library is described. This approach allows the user to switch between different versions of compilers, system libraries, MPI libraries, etc. via the "module" command. The key idea is a departure from the prescribed "(jobtype=mpi)" approach to running distributed MPI jobs. The new method requires the user to provide a script that will be run as the "executable" with the "(jobtype=single)" RSL attribute. The major advantage of the proposed method is to enable users to decide in their own script what modules, environment, etc. they would like to have in running their job.
Effects of a minimalist shoe on running economy and 5-km running performance.
Fuller, Joel T; Thewlis, Dominic; Tsiros, Margarita D; Brown, Nicholas A T; Buckley, Jonathan D
2016-09-01
The purpose of this study was to determine if minimalist shoes improve time trial performance of trained distance runners and if changes in running economy, shoe mass, stride length, stride rate and footfall pattern were related to any difference in performance. Twenty-six trained runners performed three 6-min sub-maximal treadmill runs at 11, 13 and 15 km·h(-1) in minimalist and conventional shoes while running economy, stride length, stride rate and footfall pattern were assessed. They then performed a 5-km time trial. In the minimalist shoe, runners completed the trial in less time (effect size 0.20 ± 0.12), were more economical during sub-maximal running (effect size 0.33 ± 0.14) and decreased stride length (effect size 0.22 ± 0.10) and increased stride rate (effect size 0.22 ± 0.11). All but one runner ran with a rearfoot footfall in the minimalist shoe. Improvements in time trial performance were associated with improvements in running economy at 15 km·h(-1) (r = 0.58), with 79% of the improved economy accounted for by reduced shoe mass (P < 0.05). The results suggest that running in minimalist shoes improves running economy and 5-km running performance.
Sex-related differences in the wheel-running activity of mice decline with increasing age.
Bartling, Babett; Al-Robaiy, Samiya; Lehnich, Holger; Binder, Leonore; Hiebl, Bernhard; Simm, Andreas
2017-01-01
Laboratory mice of both sexes having free access to running wheels are commonly used to study mechanisms underlying the beneficial effects of physical exercise on health and aging in humans. However, comparative wheel-running activity profiles of male and female mice for a long period of time in which increasing age plays an additional role are unknown. Therefore, we permanently recorded the wheel-running activity (i.e., total distance, median velocity, time of breaks) of female and male mice until 9 months of age. Our records indicated higher wheel-running distances for females than males which were highest in 2-month-old mice. This was mainly reached by higher running velocities of the females and not by longer running times. However, the sex-related differences declined in parallel to the age-associated reduction in wheel-running activities. Female mice also showed more variance between the weekly running distances than males, which was recorded most often for females being 4-6 months old but not older. Additional records of 24-month-old mice of both sexes indicated highly reduced wheel-running activities at old age. Surprisingly, this reduction at old age resulted mainly from lower running velocities and not from shorter running times. Old mice also differed in their course of night activity, which peaked later compared to younger mice. In summary, we demonstrated the influence of sex on the age-dependent activity profile of mice, which is somewhat contrasting to humans, and this has to be considered when transferring exercise-mediated mechanisms from mouse to human. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.
2012-06-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at CERN is the infrastructure responsible for collecting and transferring ATLAS experimental data from detectors to the mass storage system. It relies on a large, distributed computing environment, including thousands of computing nodes with thousands of applications running concurrently. In such a complex environment, information analysis is fundamental for controlling application behavior, error reporting and operational monitoring. During data taking runs, streams of messages sent by applications via the message reporting system, together with data published from applications via information services, are the main sources of knowledge about the correctness of running operations. The flow of data produced (with an average rate of O(1-10 kHz)) is constantly monitored by experts to detect problems or misbehavior. This requires strong competence and experience in understanding and discovering problems and root causes, and often the meaningful information is not in the single message or update, but in the aggregated behavior over a certain time-line. The AAL project aims at reducing manpower needs and assuring a consistently high quality of problem detection by automating most of the monitoring tasks and providing real-time correlation of data-taking and system metrics. This project combines technologies coming from different disciplines; in particular it leverages an Event Driven Architecture to unify the flow of data from the ATLAS infrastructure, a Complex Event Processing (CEP) engine for correlation of events, and a message-oriented architecture for component integration. The project is composed of two main components: a core processing engine, responsible for correlation of events through expert-defined queries, and a web-based front-end to present real-time information and interact with the system. All components work in a loosely coupled, event-based architecture, with a message broker to centralize all communication between modules. The result is an intelligent system able to extract and compute relevant information from the flow of operational data to provide real-time feedback to human experts who can promptly react when needed. The paper presents the design and implementation of the AAL project, together with the results of its usage as an automated monitoring assistant for the ATLAS data taking infrastructure.
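The AAL correlation queries run inside a dedicated CEP engine and are not shown in the abstract; the toy Python sketch below only illustrates the general idea of a sliding-window query over a message stream (alert when error messages from one application exceed a rate threshold). The class, the application name and the thresholds are invented for illustration.

```python
from collections import deque

class RateAlert:
    """Toy stand-in for a CEP-style query: alert when more than `threshold`
    error messages from one application arrive within `window` seconds."""

    def __init__(self, window=10.0, threshold=5):
        self.window = window
        self.threshold = threshold
        self.events = {}          # application name -> deque of timestamps

    def on_message(self, app, severity, timestamp):
        if severity != "ERROR":
            return None
        q = self.events.setdefault(app, deque())
        q.append(timestamp)
        # Keep only events inside the sliding time window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        if len(q) > self.threshold:
            return f"{app}: {len(q)} errors in the last {self.window:.0f} s"
        return None

cep = RateAlert()
for t in range(8):
    alert = cep.on_message("HLT-node-042", "ERROR", float(t))
    if alert:
        print(alert)
```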
An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.
ERIC Educational Resources Information Center
Gonzales, Michael G.
1984-01-01
Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
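A minimal, self-contained version of such an exercise is sketched below in Python: a textbook bubble sort timed on lists of increasing size, so that the roughly quadratic growth of the run time can be observed empirically. The list sizes are arbitrary choices.

```python
import random
import time

def bubble_sort(a):
    # Classic bubble sort: repeatedly swap adjacent out-of-order pairs.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

for n in (500, 1000, 2000):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    bubble_sort(data)
    elapsed = time.perf_counter() - start
    # Doubling n should roughly quadruple the run time (O(n^2) behavior).
    print(f"n={n:5d}  time={elapsed:.4f} s")
```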
Hoffman, J R
1997-07-01
The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 +/- 2.4%) and groups 3 (2.6 +/- 1.7%), 4 (2.3 +/- 1.6%), and 5 (2.3 +/- 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.
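The fatigue index described above is straightforward to compute; the sketch below restates it in Python, expressing the mean-to-fastest ratio as a percentage above the fastest sprint, an interpretation consistent with the reported values (e.g. 4.9%) but still an assumption. The sprint times are invented.

```python
# Fatigue index as described in the abstract: mean sprint time divided by fastest
# sprint time, here expressed as percent above the fastest time (assumed convention).
def fatigue_index(sprint_times_s):
    mean_time = sum(sprint_times_s) / len(sprint_times_s)
    return (mean_time / min(sprint_times_s) - 1) * 100

print(f"{fatigue_index([25.8, 26.4, 26.9]):.1f}%")   # hypothetical 140-m sprint times
```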
Delin, G.N.; Landon, M.K.
2002-01-01
An experiment was conducted at a depressional (lowland) and an upland site in sandy soils to evaluate the effects of surface run-off on the transport of agricultural chemicals to ground water. Approximately 16.5 cm of water was applied to both sites during the experiment, representing a natural precipitation event with a recurrence interval of approximately 100 years. Run-off was quantified at the lowland site and was not detected at the upland site during the experiment. Run-off of water to the lowland site was the most important factor affecting differences in the concentrations and fluxes of the agricultural chemicals between the two sites. Run-off of water to the lowland site appears to have played a dual role by diluting chemical concentrations in the unsaturated zone as well as increasing the concentrations at the water table, compared to the upland site. Concentrations of chloride, nitrate and atrazine plus metabolites were noticeably greater at the water table than in the unsaturated zone at both sites. The estimated mass flux of chloride and nitrate to the water table during the test were 5-2 times greater, respectively, at the lowland site compared to the upland site, whereas the flux of sulfate and atrazine plus metabolites was slightly greater at the upland site. Results indicate that matrix flow of water and chemicals was the primary process causing the observed differences between the two sites. Results of the experiment illustrate the effects of heterogeneity and the complexity of evaluating chemical transport through the unsaturated zone. Copyright ?? 2002 Elsevier Science B.V.
Lossless medical image compression with a hybrid coder
NASA Astrophysics Data System (ADS)
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed image is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length and entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
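A minimal sketch of the two-stage idea follows, assuming coarse quantization as a stand-in for the paper's embedded wavelet coder; the point is only that a lossy stage plus a losslessly coded residual reconstructs the original exactly.

```python
# Sketch of the lossy-stage-plus-lossless-residual idea: coarse quantization stands
# in for the wavelet coder, and a simple run-length coder handles the residual.
import numpy as np

def lossy_stage(img, step=8):
    """Stand-in for the embedded wavelet coder: coarse quantization."""
    return (img // step) * step

def run_length_encode(arr):
    """Very small run-length coder for a 1-D integer array."""
    out, run_val, run_len = [], int(arr[0]), 1
    for v in arr[1:]:
        if int(v) == run_val:
            run_len += 1
        else:
            out.append((run_val, run_len))
            run_val, run_len = int(v), 1
    out.append((run_val, run_len))
    return out

def run_length_decode(pairs):
    return np.concatenate([np.full(n, v, dtype=np.int32) for v, n in pairs])

img = np.random.randint(0, 256, size=(8, 8), dtype=np.int32)
approx = lossy_stage(img)
residual = (img - approx).ravel()      # small-magnitude residual; smooth images give long runs
code = run_length_encode(residual)
restored = approx + run_length_decode(code).reshape(img.shape)
assert np.array_equal(restored, img)   # the overall scheme is lossless
```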
Representation of Serendipitous Scientific Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A computer program defines and implements an innovative kind of data structure that can be used for representing information derived from serendipitous discoveries made via collection of scientific data on long exploratory spacecraft missions. Data structures capable of collecting any kind of data can easily be implemented in advance, but the task of designing a fixed and efficient data structure suitable for processing raw data into useful information and taking advantage of serendipitous scientific discovery is becoming increasingly difficult as missions go deeper into space. The present software eases the task by enabling definition of arbitrarily complex data structures that can adapt at run time as raw data are transformed into other types of information. This software runs on a variety of computers, and can be distributed in either source code or binary code form. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware. It has no specific memory requirements and depends upon the other software with which it is used. This program is implemented as a library that is called by, and becomes folded into, the other software with which it is used.
Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.
Bishop, Steven M; Ercole, Ari
2018-01-01
The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95%CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95%CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from O(n²) to O(n). The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
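The sketch below is a simplified multi-scale local-maxima detector in the spirit of the scalogram approach described above; it is not the authors' optimised algorithm, and the scale range and voting rule (a sample must be a local maximum at every scale) are assumptions made for illustration.

```python
# Simplified multi-scale peak detector: a sample is kept as a peak only if it is
# a strict local maximum at every window scale k = 1..max_scale.
import numpy as np

def multiscale_peaks(x, max_scale=25):
    x = np.asarray(x, dtype=float)
    n = len(x)
    is_peak = np.zeros(n, dtype=bool)
    is_peak[max_scale:n - max_scale] = True
    for k in range(1, max_scale + 1):
        left = x[max_scale:n - max_scale] > x[max_scale - k:n - max_scale - k]
        right = x[max_scale:n - max_scale] > x[max_scale + k:n - max_scale + k]
        is_peak[max_scale:n - max_scale] &= left & right
    return np.flatnonzero(is_peak)

t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)  # noisy periodic test signal
print(multiscale_peaks(signal, max_scale=40))
```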
Hermassi, Souhail; Chelly, Mohamed-Souhaiel; Wollny, Rainer; Hoffmeyer, Birgit; Fieseler, Georg; Schulze, Stephan; Irlenbusch, Lars; Delank, Karl-Stefan; Shephard, Roy J; Bartels, Thomas; Schwesig, René
2018-06-01
This study assessed the validity of the handball-specific complex test (HBCT) and two non-specific field tests in professional elite handball athletes, using the match performance score (MPS) as the gold standard of performance. Thirteen elite male handball players (age: 27.4±4.8 years; premier German league) performed the HBCT, the Yo-Yo Intermittent Recovery (YYIR) test and a repeated shuttle sprint ability (RSA) test at the beginning of pre-season training. The RSA results were evaluated in terms of best time, total time, and fatigue decrement. Heart rates (HR) were assessed at selected times throughout all tests; the recovery HR was measured immediately post-test and 10 minutes later. The match performance score was based on various handball specific parameters (e.g., field goals, assists, steals, blocks, and technical mistakes) as seen during all matches of the immediately subsequent season (2015/2016). The parameters of run 1, run 2, and HR recovery at minutes 6 and 10 of the RSA test all showed a variance of more than 10% (range: 11-15%). However, the variance of scores for the YYIR test was much smaller (range: 1-7%). The resting HR (r2=0.18), HR recovery at minute 10 (r2=0.10), lactate concentration at rest (r2=0.17), recovery of heart rate from 0 to 10 minutes (r2=0.15), and velocity of second throw at first trial (r2=0.37) were the most valid HBCT parameters. Much effort is necessary to assess MPS and to develop valid tests. Speed and the rate of functional recovery seem the best predictors of competitive performance for elite handball players.
Increase in local protein concentration by field-inversion gel electrophoresis.
Tsai, Henghang; Low, Teck Yew; Freeby, Steve; Paulus, Aran; Ramnarayanan, Kalpana; Cheng, Chung-Pui Paul; Leung, Hon-Chiu Eastwood
2007-09-26
Proteins that migrate through cross-linked polyacrylamide gels (PAGs) under the influence of a constant electric field experience negative factors, such as diffusion and non-specific trapping in the gel matrix. These negative factors reduce protein concentrations within a defined gel volume with increasing migration distance and, therefore, decrease protein separation efficiency. Enhancement of protein separation efficiency was investigated by implementing pulsed field-inversion gel electrophoresis (FIGE). Separation of model protein species and large protein complexes was compared between FIGE and constant field electrophoresis (CFE) in different percentages of PAGs. Band intensities of proteins in FIGE with appropriate ratios of forward and backward pulse times were superior to CFE despite longer running times. These results revealed an increase in band intensity per defined gel volume. A biphasic protein relative mobility shift was observed in percentages of PAGs up to 14%. However, the effect of FIGE on protein separation was stochastic at higher PAG percentages. Rat liver lysates subjected to FIGE in the second-dimension separation of two-dimensional polyacrylamide gel electrophoresis (2D PAGE) showed a 20% increase in the number of discernible spots compared with CFE. Nine common spots from both FIGE and CFE were selected for peptide sequencing by mass spectrometry (MS), which revealed higher final ion scores of all nine protein spots from FIGE. Native protein complexes ranging from 800 kDa to larger than 2000 kDa became apparent using FIGE compared with CFE. The present investigation suggests that FIGE under appropriate conditions improves protein separation efficiency during PAGE as a result of increased local protein concentration. FIGE can be implemented with minimal additional instrumentation in any laboratory setting. Despite the tradeoff of longer running times, FIGE can be a powerful protein separation tool.
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith
2001-01-01
This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.
Damasceno, Mayara V.; Duarte, Marcos; Pasqua, Leonardo A.; Lima-Silva, Adriano E.; MacIntosh, Brian R.; Bertuzzi, Rômulo
2014-01-01
Purpose: Previous studies report that static stretching (SS) impairs running economy. Assuming that pacing strategy relies on rate of energy use, this study aimed to determine whether SS would modify pacing strategy and performance in a 3-km running time-trial. Methods: Eleven recreational distance runners performed a) a constant-speed running test without previous SS and a maximal incremental treadmill test; b) an anthropometric assessment and a constant-speed running test with previous SS; c) a 3-km time-trial familiarization on an outdoor 400-m track; d and e) two 3-km time-trials, one with SS (experimental situation) and another without (control situation) previous static stretching. The order of the sessions d and e was randomized in a counterbalanced fashion. Sit-and-reach and drop jump tests were performed before the 3-km running time-trial in the control situation and before and after stretching exercises in the SS. Running economy, stride parameters, and electromyographic activity (EMG) of vastus medialis (VM), biceps femoris (BF) and gastrocnemius medialis (GA) were measured during the constant-speed tests. Results: The overall running time did not change with condition (SS 11:35±00:31; control 11:28±00:41, p = 0.304), but the first 100 m was completed at a significantly lower velocity after SS. Surprisingly, SS did not modify the running economy, but the iEMG for the BF (+22.6%, p = 0.031), stride duration (+2.1%, p = 0.053) and range of motion (+11.1%, p = 0.0001) were significantly modified. Drop jump height decreased following SS (−9.2%, p = 0.001). Conclusion: Static stretching impaired neuromuscular function, resulting in a slow start during a 3-km running time-trial, thus demonstrating the fundamental role of the neuromuscular system in the self-selected speed during the initial phase of the race. PMID:24905918
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
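As a hedged sketch of the inspector/executor idea described above (not the paper's symbolic transformation machinery), the following Python code builds wavefronts of loop iterations from an indirection array at run time and then executes each wavefront as an independent group.

```python
# Sketch of the inspector/executor pattern: the inspector derives wavefronts of
# iterations that may run concurrently; the executor processes one wavefront at a
# time. The loop body x[i] = x[idx[i]] + 1 and the index array are assumed
# examples; idx[i] <= i so that all dependences point backwards.
def inspector(idx):
    """Wavefront number of iteration i is one more than that of the iteration it
    depends on (idx[i] < i); idx[i] == i means no cross-iteration dependence."""
    wave = [0] * len(idx)
    for i, j in enumerate(idx):
        if j < i:
            wave[i] = wave[j] + 1
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

def executor(x, idx, fronts):
    for front in fronts:               # wavefronts run in order...
        for i in front:                # ...iterations within one could run in parallel
            x[i] = x[idx[i]] + 1
    return x

idx = [0, 0, 0, 2, 1, 5, 4]            # iteration i reads x[idx[i]] (hypothetical)
fronts = inspector(idx)
print(fronts)                          # [[0, 5], [1, 2], [3, 4], [6]]
print(executor([0.0] * len(idx), idx, fronts))   # matches the sequential result
```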
Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R
2012-02-01
The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2(MAX): 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2(MAX) prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
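The critical velocity model referenced above is commonly written as a linear distance-time relationship, d = ARC + CV·t; assuming that form, the sketch below fits CV and anaerobic running capacity from a handful of time-to-exhaustion trials. The trial numbers are invented for illustration.

```python
# Sketch of the linear distance-time form of the critical velocity model:
# distance = ARC + CV * time, fitted by least squares to exhaustive runs.
import numpy as np

time_to_exhaustion_s = np.array([150.0, 300.0, 600.0, 900.0])   # four runs to exhaustion
distance_m = np.array([810.0, 1420.0, 2650.0, 3840.0])          # distance covered in each run

# Least-squares fit of distance = ARC + CV * time
A = np.vstack([np.ones_like(time_to_exhaustion_s), time_to_exhaustion_s]).T
(arc_m, cv_m_per_s), *_ = np.linalg.lstsq(A, distance_m, rcond=None)

print(f"critical velocity  ~ {cv_m_per_s:.2f} m/s")
print(f"anaerobic capacity ~ {arc_m:.0f} m")
print(f"predicted 2-mile (3219 m) time ~ {(3219 - arc_m) / cv_m_per_s / 60:.1f} min")
```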
Changes in foot and shank coupling due to alterations in foot strike pattern during running.
Pohl, Michael B; Buckley, John G
2008-03-01
Determining if and how the kinematic relationship between adjacent body segments changes when an individual's gait pattern is experimentally manipulated can yield insight into the robustness of the kinematic coupling across the associated joint(s). The aim of this study was to assess the effects on the kinematic coupling between the forefoot, rearfoot and shank during ground contact of running with alteration in foot strike pattern. Twelve subjects ran over-ground using three different foot strike patterns (heel strike, forefoot strike, toe running). Kinematic data were collected of the forefoot, rearfoot and shank, which were modelled as rigid segments. Coupling at the ankle-complex and midfoot joints was assessed using cross-correlation and vector coding techniques. In general good coupling was found between rearfoot frontal plane motion and transverse plane shank rotation regardless of foot strike pattern. Forefoot motion was also strongly coupled with rearfoot frontal plane motion. Subtle differences were noted in the amount of rearfoot eversion transferred into shank internal rotation in the first 10-15% of stance during heel strike running compared to forefoot and toe running, and this was accompanied by small alterations in forefoot kinematics. These findings indicate that during ground contact in running there is strong coupling between the rearfoot and shank via the action of the joints in the ankle-complex. In addition, there was good coupling of both sagittal and transverse plane forefoot with rearfoot frontal plane motion via the action of the midfoot joints.
The CMS Tier0 goes cloud and grid for LHC Run 2
Hufnagel, Dirk
2015-12-23
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.
Structator: fast index-based search for RNA sequence-structure patterns
2011-01-01
Background: The secondary structure of RNA molecules is intimately related to their function and often more conserved than the sequence. Hence, the important task of searching databases for RNAs requires matching sequence-structure patterns. Unfortunately, current tools for this task have, in the best case, a running time that is only linear in the size of sequence databases. Furthermore, established index data structures for fast sequence matching, like suffix trees or arrays, cannot benefit from the complementarity constraints introduced by the secondary structure of RNAs. Results: We present a novel method and readily applicable software for time-efficient matching of RNA sequence-structure patterns in sequence databases. Our approach is based on affix arrays, a recently introduced index data structure, preprocessed from the target database. Affix arrays support bidirectional pattern search, which is required for efficiently handling the structural constraints of the pattern. Structural patterns like stem-loops can be matched inside out, such that the loop region is matched first and then the pairing bases on the boundaries are matched consecutively. This allows base-pairing information to be exploited for search-space reduction and leads to an expected running time that is sublinear in the size of the sequence database. The incorporation of a new chaining approach in the search of RNA sequence-structure patterns enables the description of molecules folding into complex secondary structures with multiple ordered patterns. The chaining approach removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our method runs up to two orders of magnitude faster than previous methods. Conclusions: The presented method's sublinear expected running time makes it well suited for RNA sequence-structure pattern matching in large sequence databases. RNA molecules containing several stem-loop substructures can be described by multiple sequence-structure patterns and their matches are efficiently handled by a novel chaining method. Beyond our algorithmic contributions, we provide with Structator a complete and robust open-source software solution for index-based search of RNA sequence-structure patterns. The Structator software is available at http://www.zbh.uni-hamburg.de/Structator. PMID:21619640
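The snippet below is a toy illustration of the inside-out idea (match the loop motif first, then extend outward checking complementarity); it has none of the affix-array indexing or chaining machinery of Structator, and the sequence, motif and stem length are invented.

```python
# Toy sketch of inside-out matching of a stem-loop pattern: find the loop motif
# first, then extend outward base by base checking Watson-Crick/GU complementarity.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def find_stem_loops(seq, loop_motif, stem_len):
    hits = []
    start = seq.find(loop_motif)
    while start != -1:
        end = start + len(loop_motif)         # loop occupies seq[start:end]
        ok = True
        for k in range(1, stem_len + 1):      # extend outward from the loop
            i, j = start - k, end + k - 1
            if i < 0 or j >= len(seq) or (seq[i], seq[j]) not in PAIRS:
                ok = False
                break
        if ok:
            hits.append((start - stem_len, end + stem_len))
        start = seq.find(loop_motif, start + 1)
    return hits

seq = "AUGGCGCGAAAGCGCCAU"      # hypothetical sequence with a GC stem around 'GAAA'
print(find_stem_loops(seq, loop_motif="GAAA", stem_len=4))   # [(3, 15)]
```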
Drones--ethical considerations and medical implications.
Pepper, Tom
2012-01-01
Drones enhance military capability and form a potent element of force protection, allowing humans to be removed from hazardous environments and tedious jobs. However, there are moral, legal, and political dangers associated with their use. Although a time may come when it is possible to develop a drone that is able to autonomously and ethically engage a legitimate target with greater reliability than a human, until then military drones demand a crawl-walk-run development methodology, consent by military personnel for weapon use, and continued debate about the complex issues surrounding their deployment.
Financial Analysis of a Selected Company
NASA Astrophysics Data System (ADS)
Baran, Dušan; Pastýr, Andrej; Baranová, Daniela
2016-06-01
The success of every business enterprise is directly related to the competencies of its management. The business enterprise can, as a result, vary how it approaches new, complex and changing market situations. During difficult times, managers therefore try to adjust their management approach to ensure the long-term and stable running of the business enterprise. They are forced to continuously retain and acquire customers and suppliers. By implementing these measures they have the opportunity to achieve a competitive advantage over other business enterprises.
NASA Astrophysics Data System (ADS)
Lindsay, R. A.; Cox, B. V.
Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantages of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are also outlined.
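As a small, hedged illustration of the kind of ratio-versus-time comparison described above, the sketch below times a few Python standard-library codecs (zlib's DEFLATE as a Lempel-Ziv-family stand-in, plus bz2 and lzma) on one synthetic buffer; these are not the adaptive Huffman and Lempel-Ziv implementations evaluated in the paper.

```python
# Compare compression ratio against run time for a few standard-library codecs.
import bz2, lzma, time, zlib

data = b"run-time and compression-ratio trade-offs " * 20000   # ~0.8 MB of redundant text

for name, compress in (("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)):
    t0 = time.perf_counter()
    packed = compress(data)
    elapsed = time.perf_counter() - t0
    print(f"{name:5s}  ratio={len(data) / len(packed):6.1f}  time={elapsed:.3f} s")
```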
S193: another non-eclipsing SW Sex star
NASA Astrophysics Data System (ADS)
Martínez-Pais, I. G.; Rodríguez-Gil, P.; Casares, J.
1999-05-01
We present time-resolved optical spectroscopy of the cataclysmic variable S193. The emission lines are remarkably similar to those of V795 Her and exhibit high-velocity S-waves and complex absorptions that are modulated with the orbital period. Evidence for transient anomalous spectral features is seen during the first two nights of our run. We propose that S193 and V795 Her are non-eclipsing SW Sex stars. Finally, we show that the `disc overflow' model fails to explain the Balmer line orbital behaviour in these low-inclination systems.
The neural correlates of morphological complexity processing: Detecting structure in pseudowords.
Schuster, Swetlana; Scharinger, Mathias; Brooks, Colin; Lahiri, Aditi; Hartwigsen, Gesa
2018-06-01
Morphological complexity is a highly debated issue in visual word recognition. Previous neuroimaging studies have shown that speakers are sensitive to degrees of morphological complexity. Two-step derived complex words (bridging through bridge N > bridge V > bridging) led to more enhanced activation in the left inferior frontal gyrus than their 1-step derived counterparts (running through run V > running). However, it remains unclear whether sensitivity to degrees of morphological complexity extends to pseudowords. If this were the case, it would indicate that abstract knowledge of morphological structure is independent of lexicality. We addressed this question by investigating the processing of two sets of pseudowords in German. Both sets contained morphologically viable two-step derived pseudowords differing in the number of derivational steps required to access an existing lexical representation and therefore the degree of structural analysis expected during processing. Using a 2 × 2 factorial design, we found lexicality effects to be distinct from processing signatures relating to structural analysis in pseudowords. Semantically-driven processes such as lexical search showed a more frontal distribution while combinatorial processes related to structural analysis engaged more parietal parts of the network. Specifically, more complex pseudowords showed increased activation in parietal regions (right superior parietal lobe and left precuneus) relative to pseudowords that required less structural analysis to arrive at an existing lexical representation. As the two sets were matched on cohort size and surface form, these results highlight the role of internal levels of morphological structure even in forms that do not possess a lexical representation. © 2018 Wiley Periodicals, Inc.
Comparison of Sprint and Run Times with Performance on the Wingate Anaerobic Test.
ERIC Educational Resources Information Center
Tharp, Gerald D.; And Others
1985-01-01
Male volunteers were studied to examine the relationship between the Wingate Anaerobic Test (WAnT) and sprint-run times and to determine the influence of age and weight. Results indicate the WAnT is a moderate predictor of dash and run times but becomes a stronger predictor when adjusted for body weight. (Author/MT)
12 CFR 1102.306 - Procedures for requesting records.
Code of Federal Regulations, 2011 CFR
2011-01-01
... section; (B) Where the running of such time is suspended for the calculation of a cost estimate for the... section; (C) Where the running of such time is suspended for the payment of fees pursuant to the paragraph... of the invoice. (ix) The time limit for the ASC to respond to a request will not begin to run until...
Barefoot running: an evaluation of current hypothesis, future research and clinical applications.
Tam, Nicholas; Astephen Wilson, Janie L; Noakes, Timothy D; Tucker, Ross
2014-03-01
Barefoot running has become a popular research topic, driven by the increasing prescription of barefoot running as a means of reducing injury risk. Proponents of barefoot running cite evolutionary theories that long-distance running ability was crucial for human survival, and proof of the benefits of natural running. Subsequently, runners have been advised to run barefoot as a treatment mode for injuries, strength and conditioning. The body of literature examining the mechanical, structural, clinical and performance implications of barefoot running is still in its infancy. Recent research has found significant differences associated with barefoot running relative to shod running, and these differences have been associated with factors that are thought to contribute to injury and performance. Crucially, long-term prospective studies have yet to be conducted and the link between barefoot running and injury or performance remains tenuous and speculative. The injury prevention potential of barefoot running is further complicated by the complexity of injury aetiology, with no single factor having been identified as causative for the most common running injuries. The aim of the present review was to critically evaluate the theory and evidence for barefoot running, drawing on both collected evidence as well as literature that have been used to argue in favour of barefoot running. We describe the factors driving the prescription of barefoot running, examine which of these factors may have merit, what the collected evidence suggests about the suitability of barefoot running for its purported uses and describe the necessary future research to confirm or refute the barefoot running hypotheses.
McCallion, Ciara; Donne, Bernard; Fleming, Neil; Blanksby, Brian
2014-01-01
This study compared stride length, stride frequency, contact time, flight time and foot-strike patterns (FSP) when running barefoot, and in minimalist and conventional running shoes. Habitually shod male athletes (n = 14; age 25 ± 6 yr; competitive running experience 8 ± 3 yr) completed a randomised order of 6 by 4-min treadmill runs at velocities (V1 and V2) equivalent to 70 and 85% of best 5-km race time, in the three conditions. Synchronous recording of 3-D joint kinematics and ground reaction force data examined spatiotemporal variables and FSP. Most participants adopted a mid-foot strike pattern, regardless of condition. Heel-toe latency was less at V2 than V1 (-6 ± 20 vs. -1 ± 13 ms, p < 0.05), which indicated a velocity-related shift towards a more FFS pattern. Stride duration and flight time, when shod and in minimalist footwear, were greater than barefoot (713 ± 48 and 701 ± 49 vs. 679 ± 56 ms, p < 0.001; and 502 ± 45 and 503 ± 41 vs. 488 ± 49 ms, p < 0.05, respectively). Contact time was significantly longer when running shod than barefoot or in minimalist footwear (211 ± 30 vs. 191 ± 29 ms and 198 ± 33 ms, p < 0.001). When running barefoot, stride frequency was significantly higher (p < 0.001) than in conventional and minimalist footwear (89 ± 7 vs. 85 ± 6 and 86 ± 6 strides·min-1). In conclusion, differences in spatiotemporal variables occurred within a single running session, irrespective of barefoot running experience, and without a detectable change in FSP. Key points: Differences in spatiotemporal variables occurred within a single running session, without a change in foot strike pattern. Stride duration and flight time were greater when shod and in minimalist footwear than when barefoot. Stride frequency when barefoot was higher than when shod or in minimalist footwear. Contact time when shod was longer than when barefoot or in minimalist footwear. Spatiotemporal variables when running in minimalist footwear more closely resemble shod than barefoot running. PMID:24790480
Kluitenberg, Bas; van der Worp, Henk; Huisstede, Bionka M A; Hartgens, Fred; Diercks, Ron; Verhagen, Evert; van Middelkoop, Marienke
2016-08-01
The incidence of running-related injuries is high. Some risk factors for injury were identified in novice runners; however, not much is known about the effect of training factors on injury risk. Therefore, the purpose of this study was to examine the associations between training factors and running-related injuries in novice runners, taking the time-varying nature of these training-related factors into account. Prospective cohort study. 1696 participants completed weekly diaries on running exposure and injuries during a 6-week running program for novice runners. Total running volume (min), frequency and mean intensity (Rate of Perceived Exertion) were calculated for the seven days prior to each training session. The association of these time-varying variables with injury was determined in an extended Cox regression analysis. The results of the multivariable analysis showed that running with a higher intensity in the previous week was associated with a higher injury risk. Running frequency was not significantly associated with injury; however, a trend towards running three times per week being more hazardous than two times could be observed. Finally, lower running volume was associated with a higher risk of sustaining an injury. These results suggest that running more than 60 min at a lower intensity is least injurious. This finding is contrary to our expectations and is presumably the result of other factors. Therefore, the findings should not be used plainly as a guideline for novices. More research is needed to establish the person-specific training patterns that are associated with injury. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Long, Leroy L; Srinivasan, Manoj
2013-04-06
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients, a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
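A hedged sketch of the underlying optimization follows: given an illustrative (not measured) non-convex metabolic-rate curve, it compares steady locomotion at the required average speed with the best two-speed mixture that covers the same distance in the same time.

```python
# Sketch: with a non-convex metabolic-rate curve, a two-speed mixture can use less
# energy than steady movement at the required average speed. The cost curve is
# illustrative only, not the measured human walking/running data.
def metabolic_rate(v):
    """Illustrative non-convex energy rate (W/kg): a walking branch efficient
    near 1.3 m/s and a running branch efficient near 3.0 m/s."""
    walk = 2.0 + 3.0 * (v - 1.3) ** 2
    run = 4.0 + 1.0 * (v - 3.0) ** 2
    return min(walk, run)

distance, total_time = 1000.0, 500.0                 # cover 1 km in 500 s
v_avg = distance / total_time                        # 2.0 m/s average speed required
steady = metabolic_rate(v_avg) * total_time

speeds = [0.01 * k for k in range(401)]              # candidate speeds 0..4 m/s
best = (steady, v_avg, v_avg, 1.0)
for a, v1 in enumerate(speeds):
    for v2 in speeds[a + 1:]:
        if not (v1 <= v_avg <= v2):
            continue
        f = (v2 - v_avg) / (v2 - v1)                 # fraction of time spent at v1
        energy = total_time * (f * metabolic_rate(v1) + (1 - f) * metabolic_rate(v2))
        if energy < best[0]:
            best = (energy, v1, v2, f)

print(f"steady at {v_avg:.1f} m/s: {steady:.0f} J/kg")
print(f"best mixture: {best[3]:.2f} of the time at {best[1]:.2f} m/s, "
      f"the rest at {best[2]:.2f} m/s -> {best[0]:.0f} J/kg")
```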
A meteorological distribution system for high-resolution terrestrial modeling (MicroMet)
Glen E. Liston; Kelly Elder
2006-01-01
An intermediate-complexity, quasi-physically based, meteorological model (MicroMet) has been developed to produce high-resolution (e.g., 30-m to 1-km horizontal grid increment) atmospheric forcings required to run spatially distributed terrestrial models over a wide variety of landscapes. The following eight variables, required to run most terrestrial models, are...
Banca, Paula; Sousa, Teresa; Duarte, Isabel Catarina; Castelo-Branco, Miguel
2015-12-01
Current approaches in neurofeedback/brain-computer interface research often focus on identifying, on a subject-by-subject basis, the neural regions that are best suited for self-driven modulation. It is known that the hMT+/V5 complex, an early visual cortical region, is recruited during explicit and implicit motion imagery, in addition to real motion perception. This study tests the feasibility of training healthy volunteers to regulate the level of activation in their hMT+/V5 complex using real-time fMRI neurofeedback and visual motion imagery strategies. We functionally localized the hMT+/V5 complex to further use as a target region for neurofeedback. A uniform strategy based on motion imagery was used to guide subjects to neuromodulate hMT+/V5. We found that 15/20 participants achieved successful neurofeedback. This modulation led to the recruitment of a specific network as further assessed by psychophysiological interaction analysis. This specific circuit, including hMT+/V5, putative V6 and medial cerebellum was activated for successful neurofeedback runs. The putamen and anterior insula were recruited for both successful and non-successful runs. Our findings indicate that hMT+/V5 is a region that can be modulated by focused imagery and that a specific cortico-cerebellar circuit is recruited during visual motion imagery leading to successful neurofeedback. These findings contribute to the debate on the relative potential of extrinsic (sensory) versus intrinsic (default-mode) brain regions in the clinical application of neurofeedback paradigms. This novel circuit might be a good target for future neurofeedback approaches that aim, for example, the training of focused attention in disorders such as ADHD.
Švarc-Gajić, Jaroslava; Clavijo, Sabrina; Suárez, Ruth; Cvetanović, Aleksandra; Cerdà, Víctor
2018-03-01
Cherry stems have been used in traditional medicine, mostly for the treatment of urinary tract infections. Extraction with subcritical water differs substantially from conventional extraction techniques in terms of selectivity, efficiency and other aspects. The complexity of plant subcritical water extracts is due to the ability of subcritical water to extract, in a single run, different chemical classes with different physico-chemical properties and polarities. In this paper, dispersive liquid-liquid microextraction (DLLME) with simultaneous derivatisation was optimised for the analysis of complex subcritical water extracts of cherry stems to allow simple and rapid preparation prior to gas chromatography-mass spectrometry (GC-MS). After defining optimal extracting and dispersive solvents, the optimised method was used for the identification of compounds belonging to different chemical classes in a single analytical run. The developed sample preparation protocol enabled simultaneous extraction and derivatisation, as well as convenient coupling with GC-MS analysis, reducing the analysis time and the number of steps. The applied analytical protocol allowed simple and rapid chemical screening of subcritical water extracts and was used for the comparison of subcritical water extracts of sweet and sour cherry stems. Graphical abstract: DLLME GC-MS analysis of cherry stem extracts obtained with subcritical water.
How to reduce long-term drift in present-day and deep-time simulations?
NASA Astrophysics Data System (ADS)
Brunetti, Maura; Vérard, Christian
2018-06-01
Climate models are often affected by long-term drift that is revealed by the evolution of global variables such as the ocean temperature or the surface air temperature. This spurious trend reduces the fidelity to initial conditions and has a great influence on the equilibrium climate after long simulation times. Useful insight on the nature of the climate drift can be obtained using two global metrics, i.e. the energy imbalance at the top of the atmosphere and at the ocean surface. The former is an indicator of the limitations within a given climate model, at the level of both numerical implementation and physical parameterisations, while the latter is an indicator of the goodness of the tuning procedure. Using the MIT general circulation model, we construct different configurations with various degree of complexity (i.e. different parameterisations for the bulk cloud albedo, inclusion or not of friction heating, different bathymetry configurations) to which we apply the same tuning procedure in order to obtain control runs for fixed external forcing where the climate drift is minimised. We find that the interplay between tuning procedure and different configurations of the same climate model provides crucial information on the stability of the control runs and on the goodness of a given parameterisation. This approach is particularly relevant for constructing good-quality control runs of the geological past where huge uncertainties are found in both initial and boundary conditions. We will focus on robust results that can be generally applied to other climate models.
A chest-shape target automatic detection method based on Deformable Part Models
NASA Astrophysics Data System (ADS)
Zhang, Mo; Jin, Weiqi; Li, Li
2016-10-01
Automatic weapon platforms are an important research direction both domestically and overseas; they must rapidly search for the object to be engaged against complex backgrounds. Fast detection of a given target is therefore the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target over the HOG feature pyramid using a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% with a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true-positive rate of detection reaches 76% with many hard samples, and the false-positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's run-time libraries for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
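The sketch below is a simplified, single-template sliding-window scoring pass over HOG-like features of one pyramid level; it is not the full DPM pipeline with root and part filters trained by a latent SVM, and the image size, cell size and template are placeholders.

```python
# Simplified sketch of sliding-window detection over HOG-like features: one linear
# template scored at every window position of a single pyramid level.
import numpy as np

def hog_cells(gray, cell=8, bins=9):
    """Coarse HOG-like features: per-cell histograms of gradient orientation."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    feat = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            feat[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    return feat

def sliding_window_scores(feat, template):
    th, tw, _ = template.shape
    fh, fw, _ = feat.shape
    scores = np.full((fh - th + 1, fw - tw + 1), -np.inf)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(feat[i:i + th, j:j + tw] * template)
    return scores

# Hypothetical 128x96 image and a 6x4-cell template (e.g. weights of a trained linear SVM).
image = np.random.rand(128, 96)
template = np.random.rand(6, 4, 9)
scores = sliding_window_scores(hog_cells(image), template)
print(np.unravel_index(np.argmax(scores), scores.shape))   # best window, in cell coordinates
```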
Simulations of isoprene: Ozone reactions for a general circulation/chemical transport model
NASA Technical Reports Server (NTRS)
Makar, P. A.; Mcconnell, J. C.
1994-01-01
A parameterized reaction mechanism has been created to examine the interactions between isoprene and other tropospheric gas-phase chemicals. Tests of the parameterization have shown that its results match those of a more complex reaction set to a high degree of accuracy. Comparisons between test runs have shown that the presence of isoprene at the start of a six day interval can enhance later ozone concentrations by as much as twenty-nine percent. The test cases used no input fluxes beyond the initial time, implying that a single input of a biogenic hydrocarbon to an airmass can alter its ozone chemistry over a time scale on the order of a week.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computer environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public domain software package, called Parallel Virtual Machine, allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
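As a hedged sketch of farming independent cases out to parallel workers, the code below uses Python's multiprocessing as a stand-in for the PVM workstation cluster described above; run_case() is a placeholder for a single engine-performance evaluation, not NEPP code.

```python
# Sketch of distributing independent cases to workers, with multiprocessing standing
# in for PVM and run_case() as a placeholder for an engine-cycle evaluation.
from multiprocessing import Pool

def run_case(case):
    # Placeholder "engine cycle": any independent, CPU-heavy evaluation goes here.
    altitude_m, mach = case
    return {"altitude_m": altitude_m, "mach": mach,
            "score": altitude_m * 1e-4 + mach}          # dummy result

if __name__ == "__main__":
    cases = [(alt, mach) for alt in (0, 5000, 10000) for mach in (0.6, 0.8, 0.9)]
    with Pool(processes=4) as pool:
        results = pool.map(run_case, cases)              # cases evaluated in parallel
    for r in results:
        print(r)
```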
Design and implementation of a software package to control a network of robotic observatories
NASA Astrophysics Data System (ADS)
Tuparev, G.; Nicolova, I.; Zlatanov, B.; Mihova, D.; Popova, I.; Hessman, F. V.
2006-09-01
We present a description of a reusable software package able to control a large, heterogeneous network of fully and semi-robotic observatories, initially developed to run the MONET network of two 1.2 m telescopes. Special attention is given to the design of a robust, long-term observation scheduler which also allows the trading of observation time and facilities within various networks. The handling of the "Phase I&II" project-development process, the time-accounting between complex organizational structures, and usability issues for making the package accessible not only to professional astronomers but also to amateurs and high-school students are discussed. A simple RTML-based solution to link multiple networks is demonstrated.
Shen, Zhongjie; He, Zhengjia; Chen, Xuefeng; Sun, Chuang; Liu, Zhiwen
2012-01-01
Performance degradation assessment based on condition monitoring plays an important role in ensuring reliable operation of equipment, reducing production downtime and saving maintenance costs, yet performance degradation has strong fuzziness, and the dynamic information is random and fuzzy, making it challenging to assess fuzzy bearing performance degradation. This study proposes a monotonic degradation assessment index of rolling bearings using fuzzy support vector data description (FSVDD) and running time. FSVDD constructs the fuzzy-monitoring coefficient ε̄, which is sensitive to the initial defect and stably increases as faults develop. Moreover, the parameter ε̄ describes the accelerating relationship between damage development and running time. However, the index ε̄ with an oscillating trend disagrees with the irreversible damage development. The running time is introduced to form a monotonic index, namely the damage severity index (DSI). DSI inherits all advantages of ε̄ and overcomes its disadvantage. A run-to-failure test is carried out to validate the performance of the proposed method. The results show that DSI reflects the growth of the damage with running time perfectly. PMID:23112591
Noack, Marko; Partzsch, Johannes; Mayr, Christian G; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene
2015-01-01
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm(2) and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling.
Robust algorithm for aligning two-dimensional chromatograms.
Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel
2012-11-06
Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
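As a hedged sketch of alignment-point-based correction (not the authors' algorithm), the code below warps a one-dimensional retention-time axis using a piecewise-linear map interpolated between user-supplied alignment points; the real method aligns whole two-dimensional chromatograms, and the numbers here are invented.

```python
# Sketch of alignment-point-based retention-time correction in one dimension:
# a piecewise-linear warp maps target retention times onto the reference axis.
import numpy as np

# (target_rt, reference_rt) alignment points supplied by the user, in minutes
alignment_points = np.array([(5.2, 5.0), (20.8, 20.0), (41.5, 40.0), (62.3, 60.0)])
target_rt_axis = np.linspace(0.0, 70.0, 8)            # retention times in the target run

# Piecewise-linear warp through the alignment points (np.interp holds the end
# values constant outside the range of the alignment points).
warped_rt = np.interp(target_rt_axis, alignment_points[:, 0], alignment_points[:, 1])

for t, w in zip(target_rt_axis, warped_rt):
    print(f"target {t:5.1f} min -> aligned {w:5.1f} min")
```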
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass August 14, 2012... Division (``MBSD'') runs its first processing pass of the day from 2 p.m. to 4 p.m. Eastern Standard Time... MBSD intends to move the time at which it runs its first processing pass of the day (historically...
Towards Run-time Assurance of Advanced Propulsion Algorithms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy
2014-01-01
This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
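A toy sketch of the run-time assurance pattern described above, in which a monitor reverts from an advanced controller to a simpler, certified one when anomalous behavior is detected; the controllers, state names and thresholds are illustrative assumptions, not NASA's framework:

```python
class RunTimeAssurance:
    """Switch from an advanced to a certified baseline controller when a
    safety monitor flags anomalous behavior (simplex-style pattern)."""

    def __init__(self, advanced, baseline, monitor):
        self.advanced, self.baseline, self.monitor = advanced, baseline, monitor
        self.reverted = False

    def command(self, state):
        if self.reverted or not self.monitor(state):
            self.reverted = True          # latch onto the certified path
            return self.baseline(state)
        return self.advanced(state)

# illustrative controllers and monitor (names and thresholds are assumptions)
advanced = lambda s: -0.8 * s["speed_error"]
baseline = lambda s: -0.2 * s["speed_error"]
monitor  = lambda s: abs(s["speed_error"]) < 5.0

rta = RunTimeAssurance(advanced, baseline, monitor)
print(rta.command({"speed_error": 2.0}))   # advanced controller in use
print(rta.command({"speed_error": 9.0}))   # anomaly detected, baseline used
```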
Pereira, Vanessa Helena; Gama, Maria Carolina Traina; Sousa, Filipe Antônio Barros; Lewis, Theodore Gyle; Gobatto, Claudio Alexandre; Manchado - Gobatto, Fúlvia Barros
2015-01-01
The aims of the present study were to analyze the fatigue process at distinct effort intensities and to investigate its occurrence as interactions among distinct bodily changes during exercise, using complex network models. Participants performed four different running intensities until exhaustion on a non-motorized treadmill with a tethered system. The intensities were selected according to the critical power model. Mechanical parameters (force, peak power, mean power, velocity and work), physiologically related parameters (heart rate, blood lactate, time until peak blood lactate concentration (lactate time), lean mass, anaerobic and aerobic capacities) and IPAQ score were obtained during the exercises and used to construct four complex network models. Such models have both theoretical and mathematical value and enable insights that go beyond conventional analysis. From these models, we ranked the influence of each node on the fatigue process. Our results show that nodes, links and network metrics are sensitive to the increase in effort intensity, with velocity a key factor for exercise maintenance in models/intensities 1 and 2 (longer efforts) and force and power in models 3 and 4, highlighting mechanical variables in the occurrence of exhaustion and suggesting applications to training prescription. PMID:25994386
NASA Astrophysics Data System (ADS)
Pereira, Vanessa Helena; Gama, Maria Carolina Traina; Sousa, Filipe Antônio Barros; Lewis, Theodore Gyle; Gobatto, Claudio Alexandre; Manchado-Gobatto, Fúlvia Barros
2015-05-01
The aims of the present study were to analyze the fatigue process at distinct effort intensities and to investigate its occurrence as interactions among distinct bodily changes during exercise, using complex network models. Participants performed four different running intensities until exhaustion on a non-motorized treadmill with a tethered system. The intensities were selected according to the critical power model. Mechanical parameters (force, peak power, mean power, velocity and work), physiologically related parameters (heart rate, blood lactate, time until peak blood lactate concentration (lactate time), lean mass, anaerobic and aerobic capacities) and IPAQ score were obtained during the exercises and used to construct four complex network models. Such models have both theoretical and mathematical value and enable insights that go beyond conventional analysis. From these models, we ranked the influence of each node on the fatigue process. Our results show that nodes, links and network metrics are sensitive to the increase in effort intensity, with velocity a key factor for exercise maintenance in models/intensities 1 and 2 (longer efforts) and force and power in models 3 and 4, highlighting mechanical variables in the occurrence of exhaustion and suggesting applications to training prescription.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Discrete Fourier Transform Analysis in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2009-01-01
Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus Nlog(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
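For context, a short sketch contrasting the conventional matrix-form DFT, which costs O(N²) operations, with NumPy's FFT at O(N log N); this does not reproduce the geometric-manifold formulation described above, it only illustrates the baseline operation counts that the approach aims to improve on for iterative phase-retrieval loops:

```python
import numpy as np

def dft_matrix(x):
    """Direct DFT by matrix multiplication: O(N^2) complex operations."""
    n = np.arange(len(x))
    w = np.exp(-2j * np.pi * np.outer(n, n) / len(x))
    return w @ x

x = np.random.rand(64) + 1j * np.random.rand(64)
assert np.allclose(dft_matrix(x), np.fft.fft(x))   # same result,
# but np.fft.fft needs only O(N log N) operations per call, which matters
# when an iterative phase-retrieval loop calls the transform many times.
```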
Using Block-local Atomicity to Detect Stale-value Concurrency Errors
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus; Biere, Armin
2004-01-01
Data races do not cover all kinds of concurrency errors. This paper presents a data-flow-based technique to find stale-value errors, which are not found by low-level and high-level data race algorithms. Stale values denote copies of shared data where the copy is no longer synchronized. The algorithm to detect such values works as a consistency check that does not require any assumptions or annotations of the program. It has been implemented as a static analysis in JNuke. The analysis is sound and requires only a single execution trace if implemented as a run-time checking algorithm. Being based on an analysis of Java bytecode, it encompasses the full program semantics, including arbitrarily complex expressions. Related techniques are more complex and more prone to over-reporting.
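A minimal Python illustration of the kind of stale-value error described above: both accesses to the shared balance are individually synchronized, so there is no low-level data race, yet the local copy outlives its lock and the final write can be based on an outdated value. The account variable and thread bodies are hypothetical, not taken from the paper's benchmarks:

```python
import threading

lock = threading.Lock()
balance = 100

def deposit(amount):
    global balance
    with lock:
        tmp = balance           # copy of shared data made under the lock
    # lock released here: tmp can become stale if another thread runs now
    with lock:
        balance = tmp + amount  # write based on the possibly stale copy

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(50,))
t1.start(); t2.start(); t1.join(); t2.join()
# Correct result is 200, but interleavings exist where one deposit is lost
# (balance == 150), even though every access was protected by the lock.
print(balance)
```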
Advanced Software V&V for Civil Aviation and Autonomy
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.
2017-01-01
With the advances in high-computing platform (e.g., advanced graphical processing units or multi-core processors), computationally-intensive software techniques such as the ones used in artificial intelligence or formal methods have provided us with an opportunity to further increase safety in the aviation industry. Some of these techniques have facilitated building safety at design time, like in aircraft engines or software verification and validation, and others can introduce safety benefits during operations as long as we adapt our processes. In this talk, I will present how NASA is taking advantage of these new software techniques to build in safety at design time through advanced software verification and validation, which can be applied earlier and earlier in the design life cycle and thus help also reduce the cost of aviation assurance. I will then show how run-time techniques (such as runtime assurance or data analytics) offer us a chance to catch even more complex problems, even in the face of changing and unpredictable environments. These new techniques will be extremely useful as our aviation systems become more complex and more autonomous.
Code of Federal Regulations, 2011 CFR
2011-07-01
....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1990-01-01
Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases, where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
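A simplified sketch of the inspector/executor idea: the inspector preprocesses the loop's index array at run time to group iterations into wavefronts of mutually independent iterations, and the executor then runs the wavefronts in order, with each wavefront a candidate for parallel execution. The dependence pattern a[i] = f(a[idx[i]]) with idx[i] <= i is an illustrative assumption, not the paper's test loop:

```python
def inspector(idx):
    """Run-time preprocessing: assign each loop iteration i, which reads
    a[idx[i]] and writes a[i] (assuming idx[i] <= i), to a wavefront level."""
    wave = [0] * len(idx)
    for i, j in enumerate(idx):
        wave[i] = wave[j] + 1 if j < i else 0
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

def executor(a, idx, fronts):
    """Execute the transformed loop wavefront by wavefront; iterations within
    one wavefront are mutually independent and could run in parallel."""
    for front in fronts:
        for i in front:                    # conceptually a parallel-for
            a[i] = a[idx[i]] + 1.0
    return a

idx = [0, 0, 1, 1, 3]                      # dependence pattern known only at run time
fronts = inspector(idx)                    # [[0], [1], [2, 3], [4]]
print(fronts, executor([1.0] * 5, idx, fronts))
```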
Traverse, Charles C.
2017-01-01
ABSTRACT Advances in sequencing technologies have enabled direct quantification of genome-wide errors that occur during RNA transcription. These errors occur at rates that are orders of magnitude higher than rates during DNA replication, but due to technical difficulties such measurements have been limited to single-base substitutions and have not yet quantified the scope of transcription insertions and deletions. Previous reporter gene assay findings suggested that transcription indels are produced exclusively by elongation complex slippage at homopolymeric runs, so we enumerated indels across the protein-coding transcriptomes of Escherichia coli and Buchnera aphidicola, which differ widely in their genomic base compositions and incidence of repeat regions. As anticipated from prior assays, transcription insertions prevailed in homopolymeric runs of A and T; however, transcription deletions arose in much more complex sequences and were rarely associated with homopolymeric runs. By reconstructing the relocated positions of the elongation complex as inferred from the sequences inserted or deleted during transcription, we show that continuation of transcription after slippage hinges on the degree of nucleotide complementarity within the RNA:DNA hybrid at the new DNA template location. PMID:28851848
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry, including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (MRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task. Copyright © 2016 the American Physiological Society.
Using Web 2.0 Techniques To Bring Global Climate Modeling To More Users
NASA Astrophysics Data System (ADS)
Chandler, M. A.; Sohl, L. E.; Tortorici, S.
2012-12-01
The Educational Global Climate Model has been used for many years in undergraduate courses and professional development settings to teach the fundamentals of global climate modeling and climate change simulation to students and teachers. While course participants have reported a high level of satisfaction in these courses and overwhelmingly claim that EdGCM projects are worth the effort, there is often a high level of frustration during the initial learning stages. Many of the problems stem from issues related to installation of the software suite and to the length of time it can take to run initial experiments. Two or more days of continuous run time may be required before enough data has been gathered to begin analyses. Asking users to download existing simulation data has not been a solution because the GCM data sets are several gigabytes in size, requiring substantial bandwidth and stable dedicated internet connections. As a means of getting around these problems we have been developing a Web 2.0 utility called EzGCM (Easy GCM), which emphasizes that participants learn the steps involved in climate modeling research: constructing a hypothesis, designing an experiment, running a computer model and assessing when an experiment has finished (reached equilibrium), using scientific visualization to support analysis, and finally communicating the results through social networking methods. We use classic climate experiments that can be "rediscovered" through exercises with EzGCM and are attempting to make this Web 2.0 tool an entry point into climate modeling for teachers with little time to cover the subject, users with limited computer skills, and for those who want an introduction to the process before tackling more complex projects with EdGCM.
Warfarin genotyping in a single PCR reaction for microchip electrophoresis.
Poe, Brian L; Haverstick, Doris M; Landers, James P
2012-04-01
Warfarin is the most commonly prescribed oral anticoagulant medication but also is the second leading cause of emergency room visits for adverse drug reactions. Genetic testing for warfarin sensitivity may reduce hospitalization rates, but prospective genotyping is impeded in part by the turnaround time and costs of genotyping. Microfluidics-based assays can reduce reagent consumption and analysis time; however, no current assay has integrated multiplexed allele-specific PCR for warfarin genotyping with electrophoretic microfluidics hardware. Ideally, such an assay would use a single PCR reaction and, without further processing, a single microchip electrophoresis (ME) run to determine the 3 single-nucleotide polymorphisms (SNPs) affecting warfarin sensitivity [i.e., CYP2C9 (cytochrome P450, family 2, subfamily C, polypeptide 9) *2, CYP2C9 *3, and the VKORC1 (vitamin K epoxide reductase complex 1) A/B haplotype]. We designed and optimized primers for a fully multiplexed assay to examine 3 biallelic SNPs with the tetraprimer amplification refractory mutation system (T-ARMS). The assay was developed with conventional PCR equipment and demonstrated for microfluidic infrared-mediated PCR. Genotypes were determined by ME on the basis of the pattern of PCR products. Thirty-five samples of human genomic DNA were analyzed with this multiplex T-ARMS assay, and 100% of the genotype determinations agreed with the results obtained by other validated methods. The sample population included several genotypes conferring warfarin sensitivity, with both homozygous and heterozygous genotypes for each SNP. Total analysis times for the PCR and ME were approximately 75 min (1-sample run) and 90 min (12-sample run). This multiplexed T-ARMS assay coupled with microfluidics hardware constitutes a promising avenue for an inexpensive and rapid platform for warfarin genotyping.
Agricultural Airplane Mission Time Structure Characteristics
NASA Technical Reports Server (NTRS)
Jewel, J. W., Jr.
1982-01-01
The time structure characteristics of agricultural airplane missions were studied by using records from NASA VGH flight recorders. Flight times varied from less than 3 minutes to more than 103 minutes. There was a significant reduction in turning time between spreading runs as pilot experience in the airplane type increased. Spreading runs accounted for only 25 to 29 percent of the flight time of an agricultural airplane. Lowering the longitudinal stick force appeared to reduce both the turning time between spreading runs and pilot fatigue at the end of a working day.
Fluidica CFD software for fluids instruction
NASA Astrophysics Data System (ADS)
Colonius, Tim
2008-11-01
Fluidica is an open-source, freely available Matlab graphical user interface (GUI) to an immersed-boundary Navier-Stokes solver. The algorithm is programmed in Fortran and compiled into Matlab as a mex-function. The user can create external flows about arbitrarily complex bodies and collections of free vortices. The code runs fast enough for complex 2D flows to be computed and visualized in real-time on the screen. This facilitates its use in homework and in the classroom for demonstrations of various potential-flow and viscous flow phenomena. The GUI has been written with the goal of allowing the student to learn how to use the software as she goes along. The user can select which quantities are viewed on the screen, including contours of various scalars, velocity vectors, streamlines, particle trajectories, streaklines, and finite-time Lyapunov exponents. In this talk, we demonstrate the software in the context of worked classroom examples demonstrating lift and drag, starting vortices, separation, and vortex dynamics.
NASA Astrophysics Data System (ADS)
Baran, Talat
2017-08-01
In this study, a new heterogeneous palladium(II) catalyst containing an O-carboxymethyl chitosan Schiff base has been designed for Suzuki coupling reactions. The chemical structure of the synthesized catalyst was characterized with FTIR, TG/DTG, ICP-OES, SEM/EDAX, 1H NMR, 13C NMR, GC/MS, XRD, and magnetic moment techniques. The reusability and catalytic behavior of the heterogeneous catalyst were tested in Suzuki reactions. The tests showed excellent selectivity, and homocoupling by-products were not observed in the spectra. The biaryl products were identified by GC/MS. In addition, the reusability tests showed that the catalyst could be reused several times (seven runs). More importantly, with very low catalyst loading (6 × 10-3 mol %) and a very short reaction time (5 min), the chitosan Schiff base supported Pd(II) complex gave high TON and TOF values. These findings show that the Schiff base supported Pd(II) catalyst is suitable for Suzuki cross-coupling reactions.
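For reference, the standard definitions behind the reported turnover values (the quantitative conversion assumed in the worked numbers is a hypothetical round figure, not a result from this study):

```latex
\mathrm{TON} = \frac{n_{\text{product}}}{n_{\text{catalyst}}}, \qquad
\mathrm{TOF} = \frac{\mathrm{TON}}{t}
```

With a catalyst loading of 6 × 10-3 mol % (6 × 10-5 mol Pd per mol substrate) and, hypothetically, quantitative conversion within t = 5 min ≈ 0.083 h, these definitions would give TON ≈ 1.7 × 10⁴ and TOF ≈ 2 × 10⁵ h⁻¹.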
GEOTAIL Spacecraft historical data report
NASA Technical Reports Server (NTRS)
Boersig, George R.; Kruse, Lawrence F.
1993-01-01
The purpose of this GEOTAIL Historical Report is to document ground processing operations information gathered on the GEOTAIL mission during processing activities at the Cape Canaveral Air Force Station (CCAFS). It is hoped that this report may aid management analysis, improve integration processing and forecasting of processing trends, and reduce real-time schedule changes. The GEOTAIL payload is the third Delta 2 Expendable Launch Vehicle (ELV) mission to document historical data. Comparisons of planned versus as-run schedule information are displayed. Information will generally fall into the following categories: (1) payload stay times (payload processing facility/hazardous processing facility/launch complex-17A); (2) payload processing times (planned, actual); (3) schedule delays; (4) integrated test times (experiments/launch vehicle); (5) unique customer support requirements; (6) modifications performed at facilities; (7) other appropriate information (Appendices A & B); and (8) lessons learned (reference Appendix C).
The Effects of Differential Goal Weights on the Performance of a Complex Financial Task.
ERIC Educational Resources Information Center
Edmister, Robert O.; Locke, Edwin A.
1987-01-01
Determined whether people could obtain outcomes on a complex task that would be in line with differential goal weights corresponding to different aspects of the task. Bank lending officers were run through lender-simulation exercises. Five performance goals were weighted. Demonstrated effectiveness of goal setting with complex tasks, using group…
NASA Technical Reports Server (NTRS)
1997-01-01
Real-Time Innovations, Inc. (RTI) collaborated with Ames Research Center, the Jet Propulsion Laboratory and Stanford University to leverage NASA research to produce ControlShell software. RTI is the first "graduate" of Ames Research Center's Technology Commercialization Center. The ControlShell system was used extensively on a cooperative project to enhance the capabilities of a Russian-built Marsokhod rover being evaluated for eventual flight to Mars. RTI's ControlShell is complex, real-time command and control software, capable of processing information and controlling mechanical devices. One ControlShell tool is StethoScope. As a real-time data collection and display tool, StethoScope allows a user to see how a program is running without changing its execution. RTI has successfully applied its software savvy in other arenas, such as telecommunications, networking, video editing, semiconductor manufacturing, automobile systems, and medical imaging.
Zou, Dan; Liu, Peng; Chen, Ka; Xie, Qi; Liang, Xinyu; Bai, Qian; Zhou, Qicheng; Liu, Kai; Zhang, Ting; Zhu, Jundong; Mi, Mantian
2015-01-01
Purpose Exercise tolerance is impaired in hypoxia. The aim of this study was to evaluate the effects of myricetin, a dietary flavonoid compound widely found in fruits and vegetables, on acute hypoxia-induced exercise intolerance in vivo and in vitro. Methods Male rats were administered myricetin or vehicle for 7 days and subsequently spent 24 hours at a barometric pressure equivalent to 5000 m. Exercise capacity was then assessed through the run-to-fatigue procedure, and mitochondrial morphology in skeletal muscle cells was observed by transmission electron microscopy (TEM). The enzymatic activities of electron transfer complexes were analyzed using an enzyme-linked immunosorbent assay (ELISA). mtDNA was quantified by real-time PCR. Mitochondrial membrane potential was measured by JC-1 staining. Protein expression was detected through western blotting, immunohistochemistry, and immunofluorescence. Results Myricetin supplementation significantly prevented the decline in run-to-fatigue time of rats in hypoxia, and attenuated acute hypoxia-induced mitochondrial impairment in skeletal muscle cells in vivo and in vitro by maintaining mitochondrial structure, mtDNA content, mitochondrial membrane potential, and activities of the respiratory chain complexes. Further studies showed that myricetin maintained mitochondrial biogenesis in skeletal muscle cells under hypoxic conditions by up-regulating the expression of mitochondrial biogenesis-related regulators; in addition, AMP-activated protein kinase (AMPK) plays a crucial role in this process. Conclusions Myricetin may have important applications for improving physical performance in hypoxic environments, which may be attributed to its protective effect against mitochondrial impairment by maintaining mitochondrial biogenesis. PMID:25919288
Simple method to verify OPC data based on exposure condition
NASA Astrophysics Data System (ADS)
Moon, James; Ahn, Young-Bae; Oh, Sey-Young; Nam, Byung-Ho; Yim, Dong Gyu
2006-03-01
With sub-100 nm lithography tools now commonplace among device makers, devices are shrinking at an unprecedented rate, and the demands placed on Optical Proximity Correction (OPC) have grown accordingly. Meeting these demands requires more aggressive OPC tactics, which in turn increase both the room for OPC error and the complexity of the OPC data. Until now, Optical Rule Check (ORC) or Design Rule Check (DRC) has been used to verify such complex OPC data, but each method has its pros and cons. ORC verification of OPC data is accurate with respect to the process, but inspection of a full-chip device requires substantial computing and software cost and long run times. DRC has no such disadvantage, but its verification accuracy with respect to the process is poor. In this study, we created a new method for OPC data verification that combines the strengths of both ORC and DRC verification. The method inspects the biasing of the OPC data with respect to the illumination condition of the process involved. This new verification method was applied to the 80 nm technology ISOLATION and GATE layers of a 512M DRAM device and showed accuracy equivalent to ORC inspection with a run time comparable to that of DRC verification.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... To Move the Time at Which It Runs Its Daily Morning Pass March 8, 2011. Pursuant to Section 19(b)(1... Backed Securities Division (``MBSD'') intends to move the time at which it runs its daily morning pass... notify participants that MBSD intends to move the time at which it runs its daily morning pass from 10:30...
Mechanics and energetics of human locomotion on sand.
Lejeune, T M; Willems, P A; Heglund, N C
1998-07-01
Moving about in nature often involves walking or running on a soft yielding substratum such as sand, which has a profound effect on the mechanics and energetics of locomotion. Force platform and cinematographic analyses were used to determine the mechanical work performed by human subjects during walking and running on sand and on a hard surface. Oxygen consumption was used to determine the energetic cost of walking and running under the same conditions. Walking on sand requires 1.6-2.5 times more mechanical work than does walking on a hard surface at the same speed. In contrast, running on sand requires only 1.15 times more mechanical work than does running on a hard surface at the same speed. Walking on sand requires 2.1-2.7 times more energy expenditure than does walking on a hard surface at the same speed; while running on sand requires 1.6 times more energy expenditure than does running on a hard surface. The increase in energy cost is due primarily to two effects: the mechanical work done on the sand, and a decrease in the efficiency of positive work done by the muscles and tendons.
Particle Choices and Collocation in Cameroon English Phrasal Verbs
ERIC Educational Resources Information Center
Epoge, Napoleon
2016-01-01
The meaning of some phrasal verbs can be guessed from the meanings of the parts (to sit down = sit + down, run after = run + after) and the meaning of some others have to be learned (to put up (a visitor) = accommodate, to hold up = cause delay or try to rob someone) due to their syntactic and semantic complexities. In this regard, the syntactic…
1. "X15 RUN UP AREA 230." A somewhat blurred, very ...
1. "X-15 RUN UP AREA 230." A somewhat blurred, very low altitude low oblique view to the northwest. This view predates construction of observation bunkers. Photo no. "14,696 58 A-AFFTC 17 NOV 58." - Edwards Air Force Base, X-15 Engine Test Complex, Rogers Dry Lake, east of runway between North Base & South Base, Boron, Kern County, CA
CMS Readiness for Multi-Core Workload Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
CMS readiness for multi-core workload scheduling
NASA Astrophysics Data System (ADS)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
2017-10-01
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
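A toy sketch of the dynamic-partitioning idea behind the multi-slot pilots described above: a pilot holding a fixed number of cores carves out sub-slots for a mixed queue of single-core and multi-core payloads. The pilot size, job names and greedy policy are illustrative assumptions, not CMS or HTCondor configuration:

```python
def schedule_on_pilot(total_cores, jobs):
    """Greedy dynamic partitioning of one pilot: each job is a (name, cores)
    request; fit multi-core and single-core payloads into the same pilot."""
    free = total_cores
    running, pending = [], []
    for name, cores in sorted(jobs, key=lambda j: -j[1]):  # biggest first
        if cores <= free:
            running.append((name, cores))
            free -= cores
        else:
            pending.append((name, cores))
    return running, pending, free

jobs = [("reco-1", 4), ("reco-2", 4), ("analysis-1", 1), ("analysis-2", 1),
        ("sim-1", 8), ("analysis-3", 1)]
running, pending, idle = schedule_on_pilot(total_cores=8, jobs=jobs)
print(running)           # e.g. [('sim-1', 8)]
print(pending, idle)     # remaining payloads wait for cores to free up
```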
Isocapnic hyperpnea training improves performance in competitive male runners.
Leddy, John J; Limprasertkul, Atcharaporn; Patel, Snehal; Modlich, Frank; Buyea, Cathy; Pendergast, David R; Lundgren, Claes E G
2007-04-01
The effects of voluntary isocapnic hyperpnea (VIH) training (10 h over 4 weeks, 30 min/day) on the ventilatory system and running performance were studied in 15 male competitive runners, 8 of whom trained twice weekly for 3 more months. Control subjects (n = 7) performed sham-VIH. Vital capacity (VC), FEV1, maximum voluntary ventilation (MVV), maximal inspiratory and expiratory mouth pressures, VO2max, 4-mile run time, treadmill run time to exhaustion at 80% VO2max, serum lactate, total ventilation (V(E)), oxygen consumption (VO2), oxygen saturation and cardiac output were measured before and after 4 weeks of VIH. Respiratory parameters and 4-mile run time were measured monthly during the 3-month maintenance period. There were no significant changes in post-VIH VC and FEV1, but MVV improved significantly (+10%). Maximal inspiratory and expiratory mouth pressures, arterial oxygen saturation and cardiac output did not change post-VIH. Respiratory and running performances were better 7 days versus 1 day after VIH. Seven days post-VIH, respiratory endurance (+208%) and treadmill run time (+50%) increased significantly, accompanied by significant reductions in respiratory frequency (-6%), V(E) (-7%), VO2 (-6%) and lactate (-18%) during the treadmill run. Post-VIH 4-mile run time did not improve in the control group, whereas it improved in the experimental group (-4%) and remained improved over a 3-month period of reduced VIH frequency. The improvements cannot be ascribed to improved blood oxygen delivery to muscle or to psychological factors.
Memory for Negation in Coordinate and Complex Sentences
ERIC Educational Resources Information Center
Harris, Richard J.
1976-01-01
Two experiments were run to test memory for the negation morpheme "not" in coordinate sentences (e.g., The ballerina had twins and the policewoman did not have triplets) and complex sentences (e.g., The ghost scared Hamlet into not murdering Shakespeare). (Editor)
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Hsu, Andrew R; Lareau, Craig R; Anderson, Robert B
2015-11-01
Infolding and retraction of an avulsed deltoid complex after ankle fracture can be a source of persistent increased medial clear space, malreduction, and postoperative pain and medial instability. The purpose of this descriptive case series was to analyze the preliminary outcomes of acute superficial deltoid complex avulsion repair during ankle fracture fixation in a cohort of National Football League (NFL) players. We found that there is often complete avulsion of the superficial deltoid complex off the proximal aspect of the medial malleolus during high-energy ankle fractures in athletes. Between 2004 and 2014, the cases of 14 NFL players who underwent ankle fracture fixation with open deltoid complex repair were reviewed. Patients with chronic deltoid ligament injuries or ankle fractures more than 2 months old were excluded. Average age for all patients was 25 years and body mass index 34.4. Player positions included 1 wide receiver, 1 tight end, 1 safety, 1 running back, 1 linebacker, and 9 offensive linemen. Average time from injury to surgery was 7.5 days. Surgical treatment for all patients consisted of ankle arthroscopy and debridement, followed by fibula fixation with plate and screws, syndesmotic fixation with suture-button devices, and open deltoid complex repair with suture anchors. Patient demographics were recorded with position played, time from injury to surgery, games played before and after surgery, ability to return to play, and postoperative complications. Return to play was defined as the ability to successfully participate in at least 1 full regular-season NFL game after surgery. All NFL players were able to return to running and cutting maneuvers by 6 months after surgery. There were no significant differences in playing experience before surgery versus after surgery. Average playing experience before surgery was 3.3 seasons, 39 games played, and 22 games started. Average playing experience after surgery was 1.6 seasons, 16 games played, and 15 games started. Return to play was 86% for all players. There were no intraoperative or postoperative complications noted, and no players had clinical evidence of medial pain or instability at final follow-up with radiographic maintenance of anatomic mortise alignment. Superficial deltoid complex avulsion during high-energy ankle fractures in athletes is a distinct injury pattern that should be recognized and may benefit from primary open repair. The majority of NFL players treated surgically for this injury pattern are able to return to play after surgery with no reported complications or persistent medial ankle pain or instability. Level IV, retrospective case series. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
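A one-dimensional schematic of the analytical propagation step that APM builds on: when a Gaussian pencil-beam dose kernel of width σ is subject to a Gaussian positional (range/setup) uncertainty of width Σ, the expected dose remains Gaussian with the variances added, so the expectation can be written in closed form rather than estimated by sampling. This omits the correlation models and fractionation handling described above:

```latex
d(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\qquad \mu \sim \mathcal{N}(\mu_0,\Sigma^2)
\;\;\Longrightarrow\;\;
\mathbb{E}[d(x)] = \frac{1}{\sqrt{2\pi(\sigma^2+\Sigma^2)}}
\exp\!\left(-\frac{(x-\mu_0)^2}{2(\sigma^2+\Sigma^2)}\right)
```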
Wahl, N; Hennig, P; Wieser, H P; Bangert, M
2017-06-26
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ (3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity, while run-times of sampling-based computations are linear in the number of fractions. Using sum sampling within APM, uncertainty propagation can only be accelerated at the cost of reduced accuracy in variance calculations. For probabilistic plan optimization, we were able to approximate the necessary pre-computations within seconds, yielding treatment plans of similar quality as gained from exact uncertainty propagation. APM is suited to enhance the trade-off between speed and accuracy in uncertainty propagation and probabilistic treatment plan optimization, especially in the context of fractionation. This brings fully-fledged APM computations within reach of clinical application.
NASA Astrophysics Data System (ADS)
Abdul Ghani, B.
2005-09-01
"TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.
Evolution of CMS workload management towards multicore job support
NASA Astrophysics Data System (ADS)
Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.
2015-12-01
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting singlecore processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 of the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
Evolution of CMS Workload Management Towards Multicore Job Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of the LHC Run 2. High-pileup complex-collision events represent a challenge for the traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting singlecore processing for other tasks difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 of the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment, and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievement of the project, and describe a preliminary prototype of R-Shell system.
Cognitive task analysis-based design and authoring software for simulation training.
Munro, Allen; Clark, Richard E
2013-10-01
The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Real-time polarization imaging algorithm for camera-based polarization navigation sensors.
Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli
2017-04-10
Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes can benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition algorithm in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are optimized through reduction to several linear calculations by exploiting the orthogonality of the Stokes parameters without affecting precision according to the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased to several tens of milliseconds from several thousand milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
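A minimal sketch of the linear Stokes-parameter step that such camera-based algorithms rely on: with intensity images taken behind polarizers at 0°, 45°, 90° and 135°, the linear Stokes parameters, degree of linear polarization and angle of polarization follow from sums and differences plus a single arctangent per pixel. The Hough-transform meridian detection described above is not reproduced here, and the toy images are assumptions for illustration:

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                      # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                          # angle of polarization (rad)
    return s0, dolp, aop

# toy 2x2 "images" for light fully polarized at 30 degrees (Malus's law)
theta = np.deg2rad(30.0)
imgs = {a: np.full((2, 2), 0.5 * (1 + np.cos(2 * (theta - np.deg2rad(a)))))
        for a in (0, 45, 90, 135)}
s0, dolp, aop = stokes_from_intensities(imgs[0], imgs[45], imgs[90], imgs[135])
print(np.rad2deg(aop))   # ~30 degrees at every pixel, dolp ~1
```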
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
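As a simplified illustration of the dynamic-programming style involved (a plain tree rather than a general tree decomposition, with hypothetical weights), maximum weighted independent set on a tree can be solved with a two-state recursion per node:

```python
def mwis_on_tree(children, weight, root=0):
    """Max-weight independent set on a rooted tree.
    best[v] = (best weight in v's subtree with v excluded,
               best weight in v's subtree with v included)."""
    best = {}

    def solve(v):
        incl, excl = weight[v], 0.0
        for c in children.get(v, []):
            solve(c)
            incl += best[c][0]          # if v is taken, each child is excluded
            excl += max(best[c])        # otherwise each child is free
        best[v] = (excl, incl)

    solve(root)
    return max(best[root])

children = {0: [1, 2], 1: [3, 4], 2: [5]}                 # small example tree
weight = {0: 1.0, 1: 4.0, 2: 2.0, 3: 3.0, 4: 1.0, 5: 5.0}
print(mwis_on_tree(children, weight))                     # 10.0, e.g. nodes {0, 3, 4, 5}
```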
Open source integrated modeling environment Delta Shell
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.
2012-04-01
In the last decade, integrated modelling has become a very popular topic in environmental modelling because it helps solve problems that are difficult to model with a single model. However, managing the complexity of integrated models and minimizing the time required to set them up remain challenging tasks. The integrated modelling environment Delta Shell simplifies these tasks. The software components of Delta Shell are easy to reuse separately from each other as well as part of an integrated environment that can run in a command-line or graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphology database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.
NASA Astrophysics Data System (ADS)
Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin
2012-08-01
Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. The research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework for a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults during run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) greatly reduce deployment time and increase flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.
Dickerson, Jane A; Schmeling, Michael; Hoofnagle, Andrew N; Hoffman, Noah G
2013-01-16
Mass spectrometry provides a powerful platform for performing quantitative, multiplexed assays in the clinical laboratory, but at the cost of increased complexity of analysis and quality assurance calculations compared to other methodologies. Here we describe the design and implementation of a software application that performs quality control calculations for a complex, multiplexed, mass spectrometric analysis of opioids and opioid metabolites. The development and implementation of this application improved our data analysis and quality assurance processes in several ways. First, use of the software significantly improved the procedural consistency for performing quality control calculations. Second, it reduced the amount of time technologists spent preparing and reviewing the data, saving on average over four hours per run, and in some cases improving turnaround time by a day. Third, it provides a mechanism for coupling procedural and software changes with the results of each analysis. We describe several key details of the implementation including the use of version control software and automated unit tests. These generally useful software engineering principles should be considered for any software development project in the clinical lab. Copyright © 2012 Elsevier B.V. All rights reserved.
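The abstract singles out automated unit tests as one of the generally useful engineering practices adopted; the snippet below is a hypothetical illustration of such a test for a simple ion-ratio QC rule. The function name, tolerance, and values are invented for illustration and are not taken from the published application.

    # Hedged sketch: a unit-testable QC rule of the kind the abstract advocates.
    import unittest

    def ion_ratio_ok(quantifier_area, qualifier_area, expected_ratio, tol=0.20):
        """Accept a result only if the qualifier/quantifier ion ratio is within tol (fractional)."""
        if quantifier_area <= 0:
            return False
        ratio = qualifier_area / quantifier_area
        return abs(ratio - expected_ratio) <= tol * expected_ratio

    class TestIonRatio(unittest.TestCase):
        def test_within_tolerance(self):
            self.assertTrue(ion_ratio_ok(1000.0, 510.0, expected_ratio=0.5))
        def test_outside_tolerance(self):
            self.assertFalse(ion_ratio_ok(1000.0, 700.0, expected_ratio=0.5))

    if __name__ == "__main__":
        unittest.main()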
Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1999-01-01
The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran 77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.
Velocity changes, long runs, and reversals in the Chromatium minus swimming response.
Mitchell, J G; Martinez-Alonso, M; Lalucat, J; Esteve, I; Brown, S
1991-01-01
The velocity, run time, path curvature, and reorientation angle of Chromatium minus were measured as a function of light intensity, temperature, viscosity, osmotic pressure, and hydrogen sulfide concentration. C. minus changed both velocity and run time. Velocity decreased with increasing light intensity in sulfide-depleted cultures and increased in sulfide-replete cultures. The addition of sulfide to cultures grown at low light intensity (10 microeinsteins m-2 s-1) caused mean run times to increase from 10.5 to 20.6 s. The addition of sulfide to cultures grown at high light intensity (100 microeinsteins m-2 s-1) caused mean run times to decrease from 15.3 to 7.7 s. These changes were maintained for up to an hour and indicate that at least some members of the family Chromatiaceae simultaneously modulate velocity and turning frequency for extended periods as part of normal taxis. PMID:1991736
ECO fill: automated fill modification to support late-stage design changes
NASA Astrophysics Data System (ADS)
Davis, Greg; Wilson, Jeff; Yu, J. J.; Chiu, Anderson; Chuang, Yao-Jen; Yang, Ricky
2014-03-01
One of the most critical factors in achieving a positive return for a design is ensuring the design not only meets performance specifications, but also produces sufficient yield to meet the market demand. The goal of design for manufacturability (DFM) technology is to enable designers to address manufacturing requirements during the design process. While new cell-based, DP-aware, and net-aware fill technologies have emerged to provide the designer with automated fill engines that support these new fill requirements, design changes that arrive late in the tapeout process (as engineering change orders, or ECOs) can have a disproportionate effect on tapeout schedules, due to the complexity of replacing fill. If not handled effectively, the impacts on file size, run time, and timing closure can significantly extend the tapeout process. In this paper, the authors examine changes to design flow methodology, supported by new fill technology, that enable efficient, fast, and accurate adjustments to metal fill late in the design process. We present an ECO fill methodology coupled with the support of advanced fill tools that can quickly locate the portion of the design affected by the change, remove and replace only the fill in that area, while maintaining the fill hierarchy. This new fill approach effectively reduces run time, contains fill file size, minimizes timing impact, and minimizes mask costs due to ECO-driven fill changes, all of which are critical factors to ensuring time-to-market schedules are maintained.
Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Lepers, Romuald
2011-08-01
In recent studies, a relationship between both low body fat and low thicknesses of selected skinfolds and running performance has been demonstrated for distances from 100 m to the marathon, but not in ultramarathons. We investigated the association of anthropometric and training characteristics with race performance in 63 male recreational ultrarunners in a 24-hour run using bivariate and multivariate analysis. The athletes achieved an average distance of 146.1 (43.1) km. In the bivariate analysis, body mass (r = -0.25), the sum of 9 skinfolds (r = -0.32), the sum of upper body skinfolds (r = -0.34), body fat percentage (r = -0.32), weekly kilometers run (r = 0.31), longest training session before the 24-hour run (r = 0.56), and personal best marathon time (r = -0.58) were related to race performance. Stepwise multiple regression showed that both the longest training session before the 24-hour run (p = 0.0013) and the personal best marathon time (p = 0.0015) had the best correlation with race performance. Performance in these 24-hour runners may be predicted (r² = 0.46) by the following equation: (Performance in a 24-hour run, km) = 234.7 + 0.481 (longest training session before the 24-hour run, km) - 0.594 (personal best marathon time, minutes). For practical applications, training variables such as volume and intensity were associated with performance but not anthropometric variables. To achieve maximum kilometers in a 24-hour run, recreational ultrarunners should have a personal best marathon time of ∼3 hours 20 minutes and complete a long training run of ∼60 km before the race, whereas anthropometric characteristics such as low body fat or low skinfold thicknesses showed no association with performance.
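As a quick worked illustration of the regression reported above (the coefficients are taken from the abstract; the input values are hypothetical), a minimal sketch:

    # Hedged sketch: applying the reported prediction equation (r^2 = 0.46).
    def predict_24h_distance_km(longest_training_run_km, marathon_pb_min):
        return 234.7 + 0.481 * longest_training_run_km - 0.594 * marathon_pb_min

    # Hypothetical runner: 60 km longest training run, 3 h 20 min (200 min) marathon PB
    print(round(predict_24h_distance_km(60, 200), 1))   # ~144.8 km, close to the cohort mean of 146.1 km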
M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting
NASA Astrophysics Data System (ADS)
Abgarmi, Bizhan; Ozacar, A. Arda
2017-04-01
Shear wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, delay times and fast-axis orientations obtained from reliable results are averaged to define anisotropy beneath recording seismic stations. However, splitting parameters show systematic variations with back azimuth in the presence of complex anisotropy and cannot be represented by an average delay time and fast-axis orientation. Previous researchers have identified anisotropic complexities in different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled by using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split is developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherent non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter tradeoffs. In order to minimize model ambiguity and identify the robust model parameters, various misfit calculation procedures are also developed and embedded in M-Split; these can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program using real, noisy data, and for this purpose stations from two different networks were utilized. The first is the Kandilli Observatory and Earthquake Research Institute (KOERI) network, which includes long-running permanent stations; the second comprises seismic stations deployed temporarily as part of the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF. It is also worth noting that M-Split is designed as an open-source program which can be modified by users for additional capabilities or for other applications.
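The grid-search structure described above can be sketched generically as follows. This is an illustrative skeleton only: the misfit function is a stand-in, whereas M-Split actually evaluates the Silver and Savage (1994) apparent-splitting predictions against the observations; the grid ranges and names are assumptions.

    # Hedged sketch: exhaustive grid search over two-layer splitting parameters
    # (fast axis phi and delay time dt per layer), with a stand-in misfit function.
    import itertools
    import numpy as np

    def grid_search(misfit, phis=np.arange(-90, 90, 5), dts=np.arange(0.5, 4.01, 0.25)):
        best = None
        for phi1, dt1, phi2, dt2 in itertools.product(phis, dts, phis, dts):
            m = misfit(phi1, dt1, phi2, dt2)
            if best is None or m < best[0]:
                best = (m, (phi1, dt1, phi2, dt2))
        return best

    # Stand-in misfit with a known minimum at (30, 1.0, -45, 1.5), for demonstration only
    demo = lambda p1, d1, p2, d2: (p1 - 30)**2 + (d1 - 1.0)**2 + (p2 + 45)**2 + (d2 - 1.5)**2
    print(grid_search(demo))

Storing the misfit over the whole grid, rather than only the best point, is what enables the sensitivity contour plots and trade-off analysis mentioned above.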
Relationship between 1.5-mile run time, injury risk and training outcome in British Army recruits.
Hall, Lianne J
2017-12-01
1.5-mile run time, as a surrogate measure of aerobic fitness, is associated with musculoskeletal injury (MSI) risk in military recruits. This study aimed to determine if 1.5-mile run times can predict injury risk and attrition rates from phase 1 (initial) training and determine if a link exists between phase 1 and 2 discharge outcomes in British Army recruits. 1.5-mile times from week 1 of initial training and MSI reported during training were retrieved for 3446 male recruits. Run times were examined against injury occurrence and training outcomes for 3050 recruits, using binary logistic regression and χ² analysis. The 1.5-mile run can predict injury risk and phase 1 attrition rates (χ²(1) = 59.3, p < 0.001; χ²(1) = 66.873, p < 0.001). Slower 1.5-mile run times were associated with higher injury occurrence (χ²(1) = 59.3, p < 0.001) and reduced phase 1 (χ² = 104.609, p < 0.001) and phase 2 (χ² = 84.978, p < 0.001) success. The 1.5-mile run can be used to guide a future standard that will in turn help reduce injury occurrence and improve training success. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Effect of match-run frequencies on the number of transplants and waiting times in kidney exchange.
Ashlagi, Itai; Bingaman, Adam; Burq, Maximilien; Manshadi, Vahideh; Gamarnik, David; Murphey, Cathi; Roth, Alvin E; Melcher, Marc L; Rees, Michael A
2018-05-01
Numerous kidney exchange (kidney paired donation [KPD]) registries in the United States have gradually shifted to high-frequency match-runs, raising the question of whether this harms the number of transplants. We conducted simulations using clinical data from 2 KPD registries (the Alliance for Paired Donation, which runs multihospital exchanges, and Methodist San Antonio, which runs single-center exchanges) to study how the frequency of match-runs impacts the number of transplants and the average waiting times. We simulate the options facing each of the 2 registries by repeated resampling from their historical pools of patient-donor pairs and nondirected donors, with arrival and departure rates corresponding to the historical data. We find that longer intervals between match-runs do not increase the total number of transplants, and that prioritizing highly sensitized patients is more effective than waiting longer between match-runs for transplanting highly sensitized patients. While we do not find that frequent match-runs result in fewer transplanted pairs, we do find that increasing arrival rates of new pairs improves both the fraction of transplanted pairs and waiting times. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available is large, humans walk the whole distance; if the time available is small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients (a consequence of non-convex energy curves). Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
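A brief sketch of the convexity argument invoked above, in our own notation (not necessarily the authors'): with \dot{E}(v) the metabolic power at speed v, covering distance D in time T at average speed \bar{v} = D/T by mixing two speeds is cheaper than steady travel exactly where \dot{E} is non-convex.

    \lambda v_1 + (1-\lambda)\, v_2 = \bar{v}, \qquad 0 \le \lambda \le 1
    \text{mixture cost} = T\bigl[\lambda\,\dot{E}(v_1) + (1-\lambda)\,\dot{E}(v_2)\bigr] < T\,\dot{E}(\bar{v})
    \text{whenever } \dot{E}(\bar{v}) \text{ lies above the chord joining } (v_1,\dot{E}(v_1)) \text{ and } (v_2,\dot{E}(v_2)),
    \text{i.e. where } \dot{E} \text{ is non-convex; the walk-rest mixture is the limiting case } v_1 = 0.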
Burt, Dean; Lamb, Kevin; Nicholas, Ceri; Twist, Craig
2015-07-01
This study examined whether lower-volume exercise-induced muscle damage (EIMD) performed 2 weeks before high-volume muscle-damaging exercise protects against its detrimental effect on running performance. Sixteen male participants were randomly assigned to a lower-volume (five sets of ten squats, n = 8) or high-volume (ten sets of ten squats, n = 8) EIMD group and completed baseline measurements for muscle soreness, knee extensor torque, creatine kinase (CK), a 5-min fixed-intensity running bout and a 3-km running time-trial. Measurements were repeated 24 and 48 h after EIMD, and the running time-trial after 48 h. Two weeks later, both groups repeated the baseline measurements, ten sets of ten squats and the same follow-up testing (Bout 2). Data analysis revealed increases in muscle soreness and CK and decreases in knee extensor torque 24-48 h after the initial bouts of EIMD. Increases in oxygen uptake, minute ventilation and rating of perceived exertion were observed during fixed-intensity running 24-48 h after EIMD Bout 1. Likewise, time increased and speed and [Formula: see text] decreased during a 3-km running time-trial 48 h after EIMD. Symptoms of EIMD and responses during fixed-intensity running and the running time-trial were attenuated in the days after the repeated bout of high-volume EIMD performed 2 weeks after the initial bout. This study demonstrates that the protective effect of lower-volume EIMD on subsequent high-volume EIMD is transferable to endurance running. Furthermore, time-trial performance was found to be preserved after a repeated bout of EIMD.
Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
Ruegsegger, Gregory N; Company, Joseph M; Toedebusch, Ryan G; Roberts, Christian K; Roberts, Michael D; Booth, Frank W
2015-01-01
In maturing rats, the growth of abdominal fat is attenuated by voluntary wheel running. After the cessation of running by wheel locking, a rapid increase in adipose tissue growth to a size that is similar to rats that have never run (i.e. catch-up growth) has been previously reported by our lab. In contrast, diet-induced increases in adiposity have a slower onset with relatively delayed transcriptomic responses. The purpose of the present study was to identify molecular pathways associated with the rapid increase in adipose tissue after ending 6 wks of voluntary running at the time of puberty. Age-matched, male Wistar rats were given access to running wheels from 4 to 10 weeks of age. From the 10th to 11th week of age, one group of rats had continued wheel access, while the other group had one week of wheel locking. Perirenal adipose tissue was extracted, RNA sequencing was performed, and bioinformatics analyses were executed using Ingenuity Pathway Analysis (IPA). IPA was chosen to assist in the understanding of complex 'omics data by integrating data into networks and pathways. Wheel locked rats gained significantly more fat mass and significantly increased body fat percentage between weeks 10-11 despite having decreased food intake, as compared to rats with continued wheel access. IPA identified 646 known transcripts differentially expressed (p < 0.05) between continued wheel access and wheel locking. In wheel locked rats, IPA revealed enrichment of transcripts for the following functions: extracellular matrix, macrophage infiltration, immunity, and pro-inflammatory. These findings suggest that increases in visceral adipose tissue that accompanies the cessation of pubertal physical activity are associated with the alteration of multiple pathways, some of which may potentiate the development of pubertal obesity and obesity-associated systemic low-grade inflammation that occurs later in life.
Hulme, A; Salmon, P M; Nielsen, R O; Read, G J M; Finch, C F
2017-11-01
There is a need for an ecological and complex systems approach for better understanding the development and prevention of running-related injury (RRI). In a previous article, we proposed a prototype model of the Australian recreational distance running system based on the Systems-Theoretic Accident Model and Processes (STAMP) method. That model included the influence of political, organisational, managerial, and sociocultural determinants alongside individual-level factors in relation to RRI development. The purpose of this study was to validate that prototype model by drawing on the expertise of both systems thinking and distance running experts. The study used a modified Delphi technique involving a series of online surveys (December 2016 to March 2017). The initial survey was divided into four sections containing a total of seven questions pertaining to different features of the prototype model. Consensus about the validity of the prototype model was reached when the number of experts who agreed or disagreed with a survey statement was ≥75% of the total number of respondents. A total of two Delphi rounds was needed to validate the prototype model. Of the 51 experts initially contacted, 50.9% (n = 26) completed the first round of the Delphi, and 92.3% (n = 24) of those in the first round participated in the second. Most of the 24 full participants considered themselves to be running experts (66.7%), and approximately a third indicated expertise as systems thinkers (33.3%). After the second round, 91.7% of the experts agreed that the prototype model was a valid description of the Australian distance running system. This is the first study to formally examine the development and prevention of RRI from an ecological and complex systems perspective. The validated model of the Australian distance running system facilitates theoretical advancement by identifying practical system-wide opportunities for the implementation of sustainable RRI prevention interventions. This 'big picture' perspective represents the first step required when thinking about the range of contributory causal factors that affect other system elements, as well as runners' behaviours, in relation to RRI risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tabraiz, Shamas; Haydar, Sajjad; Sallis, Paul; Nasreen, Sadia; Mahmood, Qaisar; Awais, Muhammad; Acharya, Kishor
2017-08-01
Intermittent backwashing and relaxation are mandatory in the membrane bioreactor (MBR) for its effective operation. The objective of the current study was to evaluate the effects of run-relaxation and run-backwash cycle time on fouling rates, and to compare the effects of backwashing and relaxation on the fouling behavior of the membrane in a high-rate submerged MBR. The study was carried out on a laboratory-scale MBR treating sewage at high flux (30 L/m²·h). The MBR was operated under three relaxation scenarios, keeping the ratio of run time to relaxation time constant, and similarly under three backwashing scenarios, keeping the ratio of run time to backwashing time constant. The results revealed that providing relaxation or backwashing at short intervals prolonged MBR operation by reducing fouling rates. The cake and pore fouling rates in the backwashing scenarios were far lower than in the relaxation scenarios, showing that backwashing is a better option than relaxation. The operation time of the backwashing scenario with the lowest cycle time was 64.6% and 21.1% longer than that of the continuous scenario and the relaxation scenario with the lowest cycle time, respectively. Increasing the cycle time increased removal efficiencies only insignificantly in both the relaxation and backwashing scenarios.
CRANS - CONFIGURABLE REAL-TIME ANALYSIS SYSTEM
NASA Technical Reports Server (NTRS)
Mccluney, K.
1994-01-01
In a real-time environment, the results of changes or failures in a complex, interconnected system need evaluation quickly. Tabulations showing the effects of changes and/or failures of a given item in the system are generally only useful for a single input, and only with regard to that item. Subsequent changes become harder to evaluate as combinations of failures produce a cascade effect. When confronted by multiple indicated failures in the system, it becomes necessary to determine a single cause. In this case, failure tables are not very helpful. CRANS, the Configurable Real-time ANalysis System, can interpret a logic tree, constructed by the user, describing a complex system and determine the effects of changes and failures in it. Items in the tree are related to each other by Boolean operators. The user is then able to change the state of these items (ON/OFF FAILED/UNFAILED). The program then evaluates the logic tree based on these changes and determines any resultant changes to other items in the tree. CRANS can also search for a common cause for multiple item failures, and allow the user to explore the logic tree from within the program. A "help" mode and a reference check provide the user with a means of exploring an item's underlying logic from within the program. A commonality check determines single point failures for an item or group of items. Output is in the form of a user-defined matrix or matrices of colored boxes, each box representing an item or set of items from the logic tree. Input is via mouse selection of the matrix boxes, using the mouse buttons to toggle the state of the item. CRANS is written in C-language and requires the MIT X Window System, Version 11 Revision 4 or Revision 5. It requires 78K of RAM for execution and a three button mouse. It has been successfully implemented on Sun4 workstations running SunOS, HP9000 workstations running HP-UX, and DECstations running ULTRIX. No executable is provided on the distribution medium; however, a sample makefile is included. Sample input files are also included. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. This program was developed in 1992.
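As an illustration of the kind of Boolean logic tree CRANS interprets, here is a minimal, hypothetical sketch; the node names and data layout are invented for illustration and do not reflect CRANS' actual input format.

    # Hedged sketch: evaluating a Boolean logic tree of item states.
    def evaluate(node, state):
        """node: item name (leaf) or (op, [children]); state: dict leaf -> True (working) / False (failed)."""
        if isinstance(node, str):
            return state[node]
        op, children = node
        values = [evaluate(c, state) for c in children]
        return all(values) if op == "AND" else any(values)

    # Hypothetical system: bus A works if either power supply works AND its breaker is closed
    bus_a = ("AND", [("OR", ["psu_1", "psu_2"]), "breaker_a"])
    state = {"psu_1": False, "psu_2": True, "breaker_a": True}
    print(evaluate(bus_a, state))    # True: psu_2 carries the load
    state["psu_2"] = False
    print(evaluate(bus_a, state))    # False: both supplies failed, so bus A fails

Re-evaluating the tree after toggling an item's state, as in the last two lines, is the basic operation behind the cascade-effect and common-cause checks described above.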
Herzog, Hanna
2006-06-01
This article discusses the theoretical claims that 'gender', 'religion' and 'state' are neither universal nor essentialist entities, but rather contingent phenomena embedded in time, place, and changing historical circumstances. Historical analysis of social processes reveals the complex relations between the three categories, as they individually and as a whole are re/constituted as changing co-tangential and often unpredictable phenomena. One case study presented in this article, that of state-run religious schools in Israel, demonstrates how state, religion and gender intersect. Through the analysis presented here, we see examples of the permeable boundaries between these social categories as well as the inter-relationships and unintended consequences of the interplay between the three. Paradoxically, graduates of these schools, especially women, have evolved from being members of a marginalized - even ignored - social category, to being active participants in the religious and political life of their community and in the political struggle over state policy regarding the future of the Jewish settlements in the West Bank.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Tessutti, Vitor; Ribeiro, Ana Paula; Trombini-Souza, Francis; Sacco, Isabel C N
2012-01-01
The practice of running has consistently increased worldwide, and with it, related lower limb injuries. The type of running surface has been associated with running injury etiology, in addition to other factors, such as the relationship between the amount and intensity of training. There is still controversy in the literature regarding the biomechanical effects of different types of running surfaces on foot-floor interaction. The aim of this study was to investigate the influence of running on asphalt, concrete, natural grass, and rubber on in-shoe pressure patterns in adult recreational runners. Forty-seven adult recreational runners ran twice for 40 m on all four different surfaces at 12 km · h(-1) (± 5%). Peak pressure, pressure-time integral, and contact time were recorded by Pedar X insoles. Asphalt and concrete were similar for all plantar variables and pressure zones. Running on grass produced peak pressures 9.3% to 16.6% lower (P < 0.001) than the other surfaces in the rearfoot and 4.7% to 12.3% (P < 0.05) lower in the forefoot. The contact time on rubber was greater than on concrete for the rearfoot and midfoot. The behaviour of rubber was similar to that of the rigid surfaces (concrete and asphalt), possibly because of its time in use (five years). Running on natural grass attenuates in-shoe plantar pressures in recreational runners. If a runner controls the amount and intensity of practice, running on grass may reduce the total stress on the musculoskeletal system compared with the total musculoskeletal stress when running on more rigid surfaces, such as asphalt and concrete.
Tachinardi, Patricia; Tøien, Øivind; Valentinuzzi, Veronica S.; Buck, C. Loren; Oda, Gisele A.
2015-01-01
Several rodent species that are diurnal in the field become nocturnal in the lab. It has been suggested that the use of running-wheels in the lab might contribute to this timing switch. This proposition is based on studies that indicate feed-back of vigorous wheel-running on the period and phase of circadian clocks that time daily activity rhythms. Tuco-tucos (Ctenomys aff. knighti) are subterranean rodents that are diurnal in the field but are robustly nocturnal in laboratory, with or without access to running wheels. We assessed their energy metabolism by continuously and simultaneously monitoring rates of oxygen consumption, body temperature, general motor and wheel running activity for several days in the presence and absence of wheels. Surprisingly, some individuals spontaneously suppressed running-wheel activity and switched to diurnality in the respirometry chamber, whereas the remaining animals continued to be nocturnal even after wheel removal. This is the first report of timing switches that occur with spontaneous wheel-running suppression and which are not replicated by removal of the wheel. PMID:26460828
High-performance reconfigurable hardware architecture for restricted Boltzmann machines.
Ly, Daniel Le; Chow, Paul
2010-11-01
Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
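For orientation, the computation that the hardware engines parallelize is the RBM Gibbs half-step below, shown here as a plain NumPy stand-in rather than the paper's hardware design; the 256 x 256 size mirrors the reported configuration, while the weight values are random placeholders.

    # Hedged sketch: one RBM Gibbs half-step, p(h_j = 1 | v) = sigmoid(c_j + v . W[:, j]).
    import numpy as np

    def sample_hidden(v, W, c, rng):
        p = 1.0 / (1.0 + np.exp(-(c + v @ W)))                    # hidden activation probabilities
        return p, (rng.random(p.shape) < p).astype(np.float64)    # Bernoulli sample

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(256, 256))    # 256 x 256 nodes, as in the reported configuration
    c = np.zeros(256)
    v = rng.integers(0, 2, size=256).astype(np.float64)
    p_h, h = sample_hidden(v, W, c, rng)

Each hidden unit's weighted sum is independent of the others, which is exactly the parallelism the FPGA engines and the multi-FPGA partitioning exploit.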
NASA Astrophysics Data System (ADS)
Bytev, Vladimir V.; Kniehl, Bernd A.
2016-09-01
We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables.
Catalogue identifier: AEPP_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 243461
No. of bytes in distributed program, including test data, etc.: 61610782
Distribution format: tar.gz
Programming language: Mathematica
Computer: All computers running Mathematica
Operating system: Operating systems running Mathematica
Classification: 4.4
Does the new version supersede the previous version?: No, it significantly extends the previous version.
Nature of problem: Reduction of the hypergeometric function FC of three variables to a set of basis functions.
Solution method: Differential reduction.
Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables.
Summary of revisions: The previous version remains unchanged.
Running time: Depends on the complexity of the problem.
Aladin Lite: Embed your Sky in the Browser
NASA Astrophysics Data System (ADS)
Boch, T.; Fernique, P.
2014-05-01
I will introduce and describe Aladin Lite, a lightweight interactive sky viewer running natively in the browser. The past five years have seen the emergence of powerful and complex web applications, thanks to major improvements in JavaScript engines and the advent of HTML5. At the same time, browser plugins (Java applets, Flash, Silverlight) that were commonly used to run rich Internet applications are declining and are not well suited for mobile devices. The Aladin team took this opportunity to develop Aladin Lite, a lightweight version of Aladin geared towards simple visualization of a sky region. Relying on the widely supported HTML5 canvas element, it provides an intuitive user interface running on desktops and tablets. This first version allows one to interactively visualize multi-resolution HEALPix images and superimpose tabular data and footprints. Aladin Lite is easily embeddable on any web page and may be of interest to data providers, which will be able to use it as an interactive previewer for their own image surveys, previously pre-processed as explained in detail in the poster "Create & publish your Hierarchical Progressive Survey". I will present the main features of Aladin Lite as well as the JavaScript API, which gives the building blocks to create rich interactions between a web page and Aladin Lite.
The swimming behavior of flagellated bacteria in viscous and viscoelastic media
NASA Astrophysics Data System (ADS)
Qu, Zijie; Henderikx, Rene; Breuer, Kenneth
2016-11-01
The motility of the bacterium E. coli in viscous and viscoelastic fluids has been widely studied, although full understanding remains elusive. The swimming mode of wild-type E. coli is well described by a run-and-tumble sequence in which periods of straight swimming at a constant speed are randomly interrupted by a tumble, defined as a sudden change of direction with a very low speed. Using a tracking microscope, we follow cells for extended periods of time and find that the swimming behavior can be more complex, and can include a wider variety of behaviors, including a "slow random walk" in which the cells move at relatively low speed without the characteristic run. Significant variation between individual cells is observed, and furthermore, a single cell can change its motility during the course of a tracking event. Changing the viscosity and viscoelasticity of the swimming media also has profound effects on the average swimming speed and run-tumble nature of the cell motility, including changes in the distribution and duration of tumbling and slow-random-walk events. The reasons for these changes are explained using a Purcell-style resistive force model for the cell and flagellar behavior, as well as a model for the changes in flagellar bundling in different fluid viscosities. National Science Foundation.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2003-07-01
During the ESCOMPTE pre-campaign (15 June to 10 July 2000), three days of intensive pollution (IOP0) were observed and simulated. The comprehensive RAMS model, version 4.3, coupled online with a chemical module including 29 species, has been used to follow the chemistry of the polluted zone over southern France. This online method can be used because the code is parallelized and the SGI 3800 computer is very powerful. Two runs have been performed: run1 with one grid and run2 with two nested grids. The redistribution of the simulated chemical species (ozone, carbon monoxide, sulphur dioxide and nitrogen oxides) was compared with aircraft measurements and surface station data. The two-grid run gave substantially better results than the one-grid run because only the former takes the outer pollutants into account. This online method helps to explain the dynamics and to retrieve the redistribution of the chemical species with good agreement.
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks offer new ways for modelers to develop model applications by offering a wide range of software components that handle common modeling tasks such as managing space and time, distributing computational tasks in a parallel processing environment, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, in the form of a new modeling markup language and a minimalist application programming interface that provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but also has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about model configurations such as input/output requirements, applicable space and time scales, and underlying modeling processes. The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly, and it can therefore be incorporated into complex workflows to implement End-to-End applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an End-to-End application delivering highly processed information to end users. The End-to-End application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Observing System of Systems; therefore, the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
Can anti-gravity running improve performance to the same degree as over-ground running?
Brennan, Christopher T; Jenkins, David G; Osborne, Mark A; Oyewale, Michael; Kelly, Vincent G
2018-03-11
This study examined the changes in running performance, maximal blood lactate concentrations and running kinematics between 85%BM anti-gravity (AG) running and normal over-ground (OG) running over an 8-week training period. Fifteen elite male developmental cricketers were assigned to either the AG or over-ground (CON) running group. The AG group (n = 7) ran twice a week on an AG treadmill and once per week over-ground. The CON group (n = 8) completed all sessions OG on grass. Both AG and OG training resulted in similar improvements in time trial and shuttle run performance. Maximal running performance showed moderate differences between the groups; however, the AG condition resulted in less improvement. Large differences in maximal blood lactate concentrations existed, with OG running resulting in greater improvements in blood lactate concentrations measured during maximal running. Moderate increases in stride length paired with moderate decreases in stride rate also resulted from AG training. AG training used to supplement regular OG training for performance should be applied cautiously, as extended use over long periods of time could lead to altered stride mechanics and reduced blood lactate.
Senter, Evan; Sheikh, Saad; Dotu, Ivan; Ponty, Yann; Clote, Peter
2012-01-01
Using complex roots of unity and the Fast Fourier Transform, we design a new thermodynamics-based algorithm, FFTbor, that computes the Boltzmann probability that secondary structures differ by a given number of base pairs from an arbitrary initial structure of a given RNA sequence. The algorithm, which runs in quartic time and quadratic space, is used to determine the correlation between kinetic folding speed and the ruggedness of the energy landscape, and to predict the location of riboswitch expression platform candidates. A web server is available at http://bioinformatics.bc.edu/clotelab/FFTbor/. PMID:23284639
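The algebraic device named above (complex roots of unity plus an FFT) can be illustrated in isolation: if Z(x) = sum_k p_k x^k collects the Boltzmann weights of structures at base-pair distance k, evaluating Z at the N-th roots of unity and applying a discrete Fourier transform recovers the coefficients p_k. The toy coefficients below are hypothetical; this is not the FFTbor code.

    # Hedged sketch: recovering polynomial coefficients from evaluations at roots of unity.
    import numpy as np

    coeffs = np.array([0.5, 0.3, 0.15, 0.05])        # toy distribution over distances k = 0..3
    N = len(coeffs)
    roots = np.exp(2j * np.pi * np.arange(N) / N)     # N-th roots of unity
    values = np.array([np.polyval(coeffs[::-1], z) for z in roots])   # Z evaluated at each root
    recovered = np.fft.fft(values) / N                # inverse transform recovers p_k
    print(np.round(recovered.real, 6))                # [0.5, 0.3, 0.15, 0.05]

This is consistent with the quartic running time stated above if each evaluation of Z at a single root takes cubic time and there are on the order of n roots for a sequence of length n.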
NASA Technical Reports Server (NTRS)
Trosin, J.
1985-01-01
Use of the Display AButments (DAB) program, which plots PAN AIR geometries, is presented. The DAB program creates hidden line displays of PAN AIR geometries and labels specified geometry components, such as abutments, networks, and network edges. It is used to alleviate the very time-consuming and error-prone abutment list checking phase of developing a valid PAN AIR geometry, and therefore represents a valuable tool for debugging complex PAN AIR geometry definitions. DAB is written in FORTRAN 77 and runs on a Digital Equipment Corporation VAX 11/780 under VMS. It utilizes a special color version of the SKETCH hidden line analysis routine.
Multiple Equilibria and Endogenous Cycles in a Non-Linear Harrodian Growth Model
NASA Astrophysics Data System (ADS)
Commendatore, Pasquale; Michetti, Elisabetta; Pinto, Antonio
The standard result of Harrod's growth model is that, because investors react more strongly than savers to a change in income, the long run equilibrium of the economy is unstable. We re-interpret the Harrodian instability puzzle as a local instability problem and integrate his model with a nonlinear investment function. Multiple equilibria and different types of complex behaviour emerge. Moreover, even in the presence of locally unstable equilibria, for a large set of initial conditions the time path of the economy is not diverging, providing a solution to the instability puzzle.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on
Code of Federal Regulations, 2011 CFR
2011-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Differential geometric treewidth estimation in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Wang, Chi; Jonckheere, Edmond; Brun, Todd
2016-10-01
The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
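As background for the curvature quantity mentioned above, the following is a minimal sketch (not the authors' code) of the Ollivier-Ricci curvature of a single graph edge, kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), using a uniform measure on each endpoint's neighbours and a small linear program for the Wasserstein distance W1. The choice of measure and the mapping from curvature to a treewidth estimate used in the paper are not reproduced here.

    # Hedged sketch: Ollivier-Ricci curvature of one edge of a graph.
    import numpy as np
    import networkx as nx
    from scipy.optimize import linprog

    def ollivier_ricci_edge(G, x, y):
        nbr_x, nbr_y = list(G.neighbors(x)), list(G.neighbors(y))
        mu_x = np.full(len(nbr_x), 1.0 / len(nbr_x))      # uniform measure on neighbours of x
        mu_y = np.full(len(nbr_y), 1.0 / len(nbr_y))      # uniform measure on neighbours of y
        # ground distances between the two neighbourhoods
        D = np.array([[nx.shortest_path_length(G, a, b) for b in nbr_y] for a in nbr_x], float)
        # earth mover's distance as a linear program over the transport plan (row-major ravel)
        c = D.ravel()
        A_eq, b_eq = [], []
        for i in range(len(nbr_x)):                       # row sums equal mu_x
            row = np.zeros(D.size); row[i * len(nbr_y):(i + 1) * len(nbr_y)] = 1
            A_eq.append(row); b_eq.append(mu_x[i])
        for j in range(len(nbr_y)):                       # column sums equal mu_y
            col = np.zeros(D.size); col[j::len(nbr_y)] = 1
            A_eq.append(col); b_eq.append(mu_y[j])
        w1 = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None)).fun
        return 1.0 - w1 / nx.shortest_path_length(G, x, y)

    print(round(ollivier_ricci_edge(nx.cycle_graph(6), 0, 1), 3))   # 0.0 for a long cycle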
NASA Technical Reports Server (NTRS)
Gryphon, Coranth D.; Miller, Mark D.
1991-01-01
PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.
Oudenhoven, Laura M; Boes, Judith M; Hak, Laura; Faber, Gert S; Houdijk, Han
2017-01-25
Running specific prostheses (RSP) are designed to replicate the spring-like behaviour of the human leg during running, by incorporating a real physical spring in the prosthesis. Leg stiffness is an important parameter in running as it is strongly related to step frequency and running economy. To be able to select a prosthesis that contributes to the required leg stiffness of the athlete, it needs to be known to what extent the behaviour of the prosthetic leg during running is dominated by the stiffness of the prosthesis or whether it can be regulated by adaptations of the residual joints. The aim of this study was to investigate whether and how athletes with an RSP could regulate leg stiffness during distance running at different step frequencies. Seven endurance runners with an unilateral transtibial amputation performed five running trials on a treadmill at a fixed speed, while different step frequencies were imposed (preferred step frequency (PSF) and -15%, -7.5%, +7.5% and +15% of PSF). Among others, step time, ground contact time, flight time, leg stiffness and joint kinetics were measured for both legs. In the intact leg, increasing step frequency was accompanied by a decrease in both contact and flight time, while in the prosthetic leg contact time remained constant and only flight time decreased. In accordance, leg stiffness increased in the intact leg, but not in the prosthetic leg. Although a substantial contribution of the residual leg to total leg stiffness was observed, this contribution did not change considerably with changing step frequency. Amputee athletes do not seem to be able to alter prosthetic leg stiffness to regulate step frequency during running. This invariant behaviour indicates that RSP stiffness has a large effect on total leg stiffness and therefore can have an important influence on running performance. Nevertheless, since prosthetic leg stiffness was considerably lower than stiffness of the RSP, compliance of the residual leg should not be ignored when selecting RSP stiffness. Copyright © 2016 Elsevier Ltd. All rights reserved.
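For context on the stiffness quantity discussed above, one common spring-mass estimate derives vertical stiffness from body mass, contact time and flight time alone, by modelling the ground reaction force as a half sine wave. This is a generic sketch under that assumption, not the computation used in the study, and the example numbers are hypothetical.

    # Hedged sketch: spring-mass (half-sine force) estimate of vertical stiffness.
    import math

    def vertical_stiffness(mass_kg, contact_time_s, flight_time_s):
        g = 9.81
        f_max = mass_kg * g * (math.pi / 2.0) * (flight_time_s / contact_time_s + 1.0)               # peak force
        delta_y = f_max * contact_time_s**2 / (mass_kg * math.pi**2) - g * contact_time_s**2 / 8.0   # CoM drop
        return f_max / delta_y   # N/m

    # Hypothetical runner: 70 kg, 0.25 s contact, 0.12 s flight -> roughly 24 kN/m
    print(round(vertical_stiffness(70.0, 0.25, 0.12)))

Because contact and flight times enter directly, the observation above that the prosthetic leg held contact time constant across step frequencies is consistent with an invariant stiffness under this kind of model.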
Knechtle, Beat; Knechtle, Patrizia; Rosemann, Thomas; Senn, Oliver
2010-12-01
The purpose of this study was to investigate the association between selected skin-fold thicknesses and training variables with a half-marathon race time, for both male and female recreational runners, using bi- and multivariate analysis. In 52 men, two skin-fold thicknesses (abdominal and calf) were significantly and positively correlated with race time; whereas in 15 women, five (pectoral, mid-axilla, subscapular, abdominal, and suprailiac) showed positive and significant relations with total race time. In men, the mean weekly running distance, minimum distance run per week, maximum distance run per week, mean weekly hours of running, number of running training sessions per week, and mean speed of the training sessions were significantly and negatively related to total race time, but not in women. Interaction analyses suggested that race time was more strongly associated with anthropometry in women than men. Race time for the women was independently associated with the sum of eight skin-folds; but for the men, only the mean speed during training sessions was independently associated. Skin-fold thicknesses and training variables in these groups were differently related to race time according to their sex.
Hébert-Losier, Kim; Jensen, Kurt; Holmberg, Hans-Christer
2014-11-01
Jumping and hopping are used to measure lower-body muscle power, stiffness, and stretch-shortening-cycle utilization in sports, with several studies reporting correlations between such measures and sprinting and/or running abilities in athletes. Neither jumping and hopping nor correlations with sprinting and/or running have been examined in orienteering athletes. The authors investigated squat jump (SJ), countermovement jump (CMJ), standing long jump (SLJ), and hopping performed by 8 elite and 8 amateur male foot-orienteering athletes (29 ± 7 y, 183 ± 5 cm, 73 ± 7 kg) and possible correlations to road, path, and forest running and sprinting performance, as well as running economy, velocity at anaerobic threshold, and peak oxygen uptake (VO(2peak)) from treadmill assessments. During SJs and CMJs, elites demonstrated superior relative peak forces, times to peak force, and prestretch augmentation, albeit lower SJ heights and peak powers. Between-groups differences were unclear for CMJ heights, hopping stiffness, and most SLJ parameters. Large pairwise correlations were observed between relative peak and time to peak forces and sprinting velocities; time to peak forces and running velocities; and prestretch augmentation and forest-running velocities. Prestretch augmentation and time to peak forces were moderately correlated to VO(2peak). Correlations between running economy and jumping or hopping were small or trivial. Overall, the elites exhibited superior stretch-shortening-cycle utilization and rapid generation of high relative maximal forces, especially vertically. These functional measures were more closely related to sprinting and/or running abilities, indicating benefits of lower-body training in orienteering.
Kim, Seungsuk
2017-08-01
[Purpose] This study aimed to analyze the effects of complex training on carbon monoxide (CO), cardiorespiratory function, and body mass among college students, the age group with the highest smoking rate. [Subjects and Methods] A total of 40 college students voluntarily participated in this study. All subjects smoked and were randomly divided into two groups: the experimental group (N=20) and the control group (N=20). The experimental group underwent complex training (30 min of training five times a week for 12 weeks) while the control group did not participate in such training. The complex training consisted of two parts: aerobic exercise (walking and running) and resistance exercise (weight training). [Results] Two-way ANOVA with repeated measures revealed significant interactions for CO, VO2max, HRmax, VEmax, body fat, and skeletal muscle mass, indicating that the changes differed significantly between groups. [Conclusion] A 12-week complex physical exercise program would be an effective way to support a stop-smoking campaign, as it quickly eliminates CO from the body and improves cardiorespiratory function and body condition.
β-Cyclodextrin inclusion complex: preparation, characterization, and its aspirin release in vitro
NASA Astrophysics Data System (ADS)
Zhou, Hui-Yun; Jiang, Ling-Juan; Zhang, Yan-Ping; Li, Jun-Bo
2012-09-01
In this work, the optimal clathration conditions were investigated for the preparation of the aspirin-β-cyclodextrin (Asp-β-CD) inclusion complex using design of experiment (DOE) methodology. A 3-level, 3-factor Box-Behnken design with a total of 17 experimental runs was used. The Asp-β-CD inclusion complex was prepared by the saturated solution method. The influence of the molar ratio of β-CD to Asp, the clathration temperature, and the clathration time on the embedding rate was investigated, and the optimum values of these three test variables were found to be 0.82, 49°C and 2.0 h, respectively. The embedding rate could be up to 61.19%. The formation of bonding between the -COOH group of Asp and the O-H group of β-CD might play an important role in the clathration process according to FT-IR spectra. The release kinetics of Asp from the inclusion complex were studied to evaluate the drug release mechanism and diffusion coefficients. The results showed that drug release from the matrix occurred through a Fickian diffusion mechanism. The cumulative release of Asp reached only 40% over 24 h, so the inclusion complex could potentially be applied as a long-acting delivery system.
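For readers unfamiliar with the design, a 3-factor Box-Behnken layout can be generated mechanically: every pair of factors is run at all four (±1, ±1) combinations with the remaining factor held at its center level, plus replicated center points. The sketch below is a generic illustration consistent with the 17 runs mentioned (the 5 center points are an assumption), not the authors' actual design table; coded levels would still have to be mapped to the real factor ranges.

```python
from itertools import combinations
import numpy as np

def box_behnken(n_factors=3, n_center=5):
    """Coded Box-Behnken design matrix: for every factor pair, all four
    (+/-1, +/-1) settings with the other factors at 0, plus center points.
    For 3 factors and 5 center points this yields 17 runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * n_factors] * n_center)      # replicated center points
    return np.array(runs)

design = box_behnken()
print(design.shape)     # (17, 3)
print(design[:4])
```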
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This technical note describes the current capabilities and availability of the Automated Dredging and Disposal Alternatives Management System (ADDAMS). The technical note replaces the earlier Technical Note EEDP-06-12, which should be discarded. Planning, design, and management of dredging and dredged material disposal projects often require complex or tedious calculations or involve complex decision-making criteria. In addition, the evaluations often must be done for several disposal alternatives or disposal sites. ADDAMS is a personal computer (PC)-based system developed to assist in making such evaluations in a timely manner. ADDAMS contains a collection of computer programs (applications) designed to assist in managing dredging projects. This technical note describes the system, currently available applications, mechanisms for acquiring and running the system, and provisions for revision and expansion.
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Use of Continuous Integration Tools for Application Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B
High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
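The core of such a monitoring job is simply comparing each new timing against a stored baseline and failing the CI build when the slowdown exceeds a tolerance. The following is a minimal sketch of that idea; the history file name, JSON layout, and 10% tolerance are assumptions for illustration, not details of the paper's Jenkins-based system.

```python
import json, statistics, sys, pathlib

HISTORY = pathlib.Path("perf_history.json")   # hypothetical file kept by the CI job
TOLERANCE = 0.10                              # flag runs >10% slower than baseline

def check(app, current_seconds):
    """Compare the current benchmark time against the rolling median of past
    runs and exit non-zero so the CI job is marked as failed on a regression."""
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
    past = history.get(app, [])
    if past:
        baseline = statistics.median(past)
        if current_seconds > baseline * (1 + TOLERANCE):
            print(f"{app}: {current_seconds:.1f}s vs baseline {baseline:.1f}s -> REGRESSION")
            sys.exit(1)
    history[app] = (past + [current_seconds])[-20:]   # keep only the last 20 runs
    HISTORY.write_text(json.dumps(history, indent=2))
    print(f"{app}: {current_seconds:.1f}s ok")

if __name__ == "__main__":
    # e.g. invoked from a CI step: python check.py my_benchmark 812.4
    check(sys.argv[1], float(sys.argv[2]))
```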
CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.
Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan
2017-06-24
The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a large body of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the information about users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and is generally unknown before submission, which is unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling the workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn^2) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on the multiple similar RNA/DNA sequence alignment and proposes a novel bitmap based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPU is a promising approach to accelerate multiple sequence alignment. Besides, adopting the co-run computation model can maximize the entire system utilization significantly. The source code is available at https://github.com/wangvsa/CMSA .
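To make the center star idea concrete, the sketch below shows one cheap, single-pass way to pick a center for highly similar DNA sequences: build a column-wise consensus profile and choose the sequence closest to it, which costs O(mn) for m sequences of length roughly n. This is only an illustrative stand-in; it is not CMSA's bitmap-based algorithm, and the toy sequences are invented.

```python
import numpy as np

def pick_center_sequence(seqs):
    """Single-pass center-sequence selection sketch for similar DNA sequences:
    build a per-column consensus profile, then return the index of the
    sequence with the fewest mismatches to that consensus (O(mn))."""
    n = min(len(s) for s in seqs)                 # similar sequences: lengths close
    alphabet = "ACGT"
    index = {c: k for k, c in enumerate(alphabet)}
    counts = np.zeros((n, len(alphabet)), dtype=np.int64)
    for s in seqs:                                # one pass to build the profile
        for pos in range(n):
            k = index.get(s[pos])
            if k is not None:
                counts[pos, k] += 1
    consensus = [alphabet[k] for k in counts.argmax(axis=1)]
    def mismatches(s):
        return sum(a != b for a, b in zip(s[:n], consensus))
    return min(range(len(seqs)), key=lambda i: mismatches(seqs[i]))

seqs = ["ACGTACGTAC", "ACGTACGAAC", "ACCTACGTAC", "ACGTACGTAT"]   # toy data
print("center sequence index:", pick_center_sequence(seqs))
```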
Willson, John D; Bjorhus, Jordan S; Williams, D S Blaise; Butler, Robert J; Porcari, John P; Kernozek, Thomas W
2014-01-01
Minimalistic footwear has garnered widespread interest in the running community, based largely on the premise that the footwear may reduce certain running-related injury risk factors through adaptations in running mechanics and foot strike pattern. To examine short-term adaptations in running mechanics among runners who typically run in conventional cushioned heel running shoes as they transition to minimalistic footwear. A 2-week, prospective, observational study. A movement science laboratory. Nineteen female runners with a rear foot strike (RFS) pattern who usually train in conventional running shoes. The participants trained for 20 minutes, 3 times per week for 2 weeks by using minimalistic footwear. Three-dimensional lower extremity running mechanics were analyzed before and after this 2-week period. Hip, knee, and ankle joint kinematics at initial contact; step length; stance time; peak ankle joint moment and joint work; impact peak; vertical ground reaction force loading rate; and foot strike pattern preference were evaluated before and after the intervention. The knee flexion angle at initial contact increased 3.8° (P < .01), but the ankle and hip flexion angles at initial contact did not change after training. No changes in ankle joint kinetics or running temporospatial parameters were observed. The majority of participants (71%), before the intervention, demonstrated an RFS pattern while running in minimalistic footwear. The proportion of runners with an RFS pattern did not decrease after 2 weeks (P = .25). Those runners who chose an RFS pattern in minimalistic shoes experienced a vertical loading rate that was 3 times greater than those who chose to run with a non-RFS pattern. Few systematic changes in running mechanics were observed among participants after 2 weeks of training in minimalistic footwear. The majority of the participants continued to use an RFS pattern after training in minimalistic footwear, and these participants experienced higher vertical loading rates. Continued exposure to these greater loading rates may have detrimental effects over time. Copyright © 2014 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... Time at Which the Mortgage-Backed Securities Division Runs Its Daily Morning Pass September 26, 2012. I... FICC proposes to move the time at which its Mortgage-Backed Securities Division (``MBSD'') runs its... processing passes. MBSD currently runs its first processing pass of the day (historically referred to as the...
ERIC Educational Resources Information Center
Belke, T. W.; Mondona, A. R.; Conrad, K. M.; Poirier, K. F.; Pickering, K. L.
2008-01-01
Do rats run and respond at a higher rate to run during the dark phase when they are typically more active? To answer this question, Long Evans rats were exposed to a response-initiated variable interval 30-s schedule of wheel-running reinforcement during light and dark cycles. Wheel-running and local lever-pressing rates increased modestly during…
Lower-body determinants of running economy in male and female distance runners.
Barnes, Kyle R; Mcguigan, Michael R; Kilding, Andrew E
2014-05-01
A variety of training approaches have been shown to improve running economy in well-trained athletes. However, there is a paucity of data exploring lower-body determinants that may affect running economy and account for differences that may exist between genders. Sixty-three male and female distance runners were assessed in the laboratory for a range of metabolic, biomechanical, and neuromuscular measures potentially related to running economy (ml·kg⁻¹·min⁻¹) at a range of running speeds. At all common test velocities, women were more economical than men (effect size [ES] = 0.40); however, when compared in terms of relative intensity, men had better running economy (ES = 2.41). Leg stiffness (r = -0.80) and moment arm length (r = 0.90) showed large to extremely large correlations with running economy and with each other (r = -0.82). Correlations between running economy and kinetic measures (peak force, peak power, and time to peak force) for both genders were unclear. The relationship for stride rate (r = -0.27 to -0.31) was in the opposite direction to that for stride length (r = 0.32-0.49), and the relationship for contact time (r = -0.21 to -0.54) was opposite to that for flight time (r = 0.06-0.74). Although both leg stiffness and moment arm length are highly related to running economy, it seems that no single lower-body measure can completely explain differences in running economy between individuals or genders. Running economy is therefore likely determined from the sum of influences from multiple lower-body attributes.
LUXSim: A component-centric approach to low-background simulations
Akerib, D. S.; Bai, X.; Bedikian, S.; ...
2012-02-13
Geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials. These simulations have mostly been run with a source beam outside the detector. In the case of low-background physics, however, a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves. From this standpoint, there is no single source or beam, but rather a collection of sources with potentially complicated spatial extent. LUXSim is a simulation framework used by the LUX collaboration that takes a component-centric approach to event generation and recording. A new set of classes allows for multiple radioactive sources to be set within any number of components at run time, with the entire collection of sources handled within a single simulation run. Various levels of information can also be recorded from the individual components, with these record levels also being set at runtime. This flexibility in both source generation and information recording is possible without the need to recompile, reducing the complexity of code management and the proliferation of versions. Within the code itself, casting geometry objects within this new set of classes rather than as the default Geant4 classes automatically extends this flexibility to every individual component. No additional work is required on the part of the developer, reducing development time and increasing confidence in the results. Here, we describe the guiding principles behind LUXSim, detail some of its unique classes and methods, and give examples of usage.
The Influence of Running on Foot Posture and In-Shoe Plantar Pressures.
Bravo-Aguilar, María; Gijón-Noguerón, Gabriel; Luque-Suarez, Alejandro; Abian-Vicen, Javier
2016-03-01
Running can be considered a high-impact practice, and most people practicing continuous running experience lower-limb injuries. The aim of this study was to determine the influence of 45 min of running on foot posture and plantar pressures. The sample comprised 116 healthy adults (92 men and 24 women) with no foot-related injuries. The mean ± SD age of the participants was 28.31 ± 6.01 years; body mass index, 23.45 ± 1.96; and training time, 11.02 ± 4.22 h/wk. Outcome measures were collected before and after 45 min of running at an average speed of 12 km/h, and included the Foot Posture Index (FPI) and a baropodometric analysis. The results show that foot posture can be modified after 45 min of running. The mean ± SD FPI changed from 6.15 ± 2.61 to 4.86 ± 2.65 (P < .001). Significant decreases in mean plantar pressures in the external, internal, rearfoot, and forefoot edges were found after 45 min of running. Peak plantar pressures in the forefoot decreased after running. The pressure-time integral decreased during the heel strike phase in the internal edge of the foot. In addition, a decrease was found in the pressure-time integral during the heel-off phase in the internal and rearfoot edges. The findings suggest that after 45 min of running, a pronated foot tends to change into a more neutral position, and decreased plantar pressures were found after the run.
Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio
2015-01-01
Objective Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual’s capacity to drive safely. Methods The test is run as an app for Apple iOS and Android mobile operating systems and facilitates four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Results Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. PMID:25709406
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
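As a concrete, simplified illustration of the tree-based idea, the sketch below builds a preconditioner for conjugate gradient from a spanning tree of the matrix graph: only the strongest off-diagonal couplings (a spanning tree) plus the original diagonal are kept and factored once, and that factorization is applied inside each CG iteration. This is not the chain-of-approximations method described above; the toy matrix, edge weighting, and SciPy-based implementation are assumptions made purely for illustration on a symmetric, diagonally dominant system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.sparse.linalg import LinearOperator, cg, splu

def spanning_tree_preconditioner(A):
    """Keep a maximum-|weight| spanning tree of A's off-diagonal couplings
    plus A's diagonal, factor that sparse tree matrix once, and use its
    solve as a preconditioner (a simplified support-tree idea)."""
    A = sp.csr_matrix(A)
    offdiag = (A - sp.diags(A.diagonal())).tocsr()
    offdiag.eliminate_zeros()
    w = offdiag.copy()
    w.data = 1.0 / np.abs(w.data)          # small weight = strong coupling
    tree = minimum_spanning_tree(w)        # keeps the strongest couplings
    r, c = tree.nonzero()
    vals = np.asarray(A[r, c]).ravel()
    T = sp.coo_matrix((vals, (r, c)), shape=A.shape)
    T = (T + T.T + sp.diags(A.diagonal())).tocsc()
    lu = splu(T)                           # tree systems factor cheaply
    return LinearOperator(A.shape, matvec=lu.solve)

# toy symmetric, diagonally dominant system standing in for a power-flow matrix
n = 300
B = sp.random(n, n, density=0.02, random_state=42)
S = abs(B + B.T)
A = sp.diags(np.asarray(S.sum(axis=1)).ravel() + 1.0) - S
b = np.random.default_rng(0).standard_normal(n)

x, info = cg(A, b, M=spanning_tree_preconditioner(A))
print("CG info:", info, "residual:", np.linalg.norm(A @ x - b))
```

The appeal of tree-structured preconditioners is that they are cheap to factor and apply while still capturing much of the system's connectivity, which is the intuition behind the low-stretch spanning-tree construction the abstract refers to.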
Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio
2015-01-01
Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. The test is run as an app for Apple iOS and Android mobile operating systems and facilitates four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely.
Park, Peter J; Bell, M A
2010-06-01
We tested the hypothesis that increased telencephalon size has evolved in threespine stickleback fish (Gasterosteus aculeatus) from structurally complex habitats using field-caught samples from one sea-run (ancestral) and 18 ecologically diverse freshwater (descendant) populations. Freshwater habitats ranged from shallow, structurally complex lakes with benthic-foraging stickleback (benthics), to deeper, structurally simple lakes in which stickleback depend more heavily on plankton for prey (generalists). Contrary to our expectations, benthics had smaller telencephala than generalists, but the shape of the telencephalon in the sea-run and benthic populations was more convex laterally. Convex telencephalon shape may indicate enlargement of the dorsolateral region, which is homologous with the tetrapod hippocampus. Telencephalon morphology is also sexually dimorphic, with larger, less convex telencephala in males. Freshwater stickleback from structurally complex habitats have retained the ancestral telencephalon morphology, but populations that feed more in open habitats on plankton have evolved larger, laterally concave telencephala.
Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald
2012-01-01
Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same anthropometric and training characteristics as used for ultramarathoners. Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. After multivariate regression, running speed of the training units (β = -0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r² = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) - 12.06 × (speed in training, km/h). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. The present results suggest that low body fat and a training speed close to race pace (about 11 km/h) are two key factors for a fast marathon race time in recreational male marathon runners.
Automated Flight Dynamics Product Generation for the EOS AM-1 Spacecraft
NASA Technical Reports Server (NTRS)
Matusow, Carla
1999-01-01
As part of NASA's Earth Science Enterprise, the Earth Observing System (EOS) AM-1 spacecraft is designed to monitor long-term, global, environmental changes. Because of the complexity of the AM-1 spacecraft, the mission operations center requires more than 80 distinct flight dynamics products (reports). To create these products, the AM-1 Flight Dynamics Team (FDT) will use a combination of modified commercial software packages (e.g., Analytical Graphics' Satellite ToolKit) and NASA-developed software applications. While providing the most cost-effective solution to meeting the mission requirements, the integration of these software applications raises several operational concerns: (1) Routine product generation requires knowledge of multiple applications executing on a variety of hardware platforms. (2) Generating products is a highly interactive process requiring a user to interact with each application multiple times to generate each product. (3) Routine product generation requires several hours to complete. (4) User interaction with each application introduces the potential for errors, since users are required to manually enter filenames and input parameters as well as run applications in the correct sequence. Generating products requires some level of flight dynamics expertise to determine the appropriate inputs and sequencing. To address these issues, the FDT developed an automation software tool called AutoProducts, which runs on a single hardware platform and provides all necessary coordination and communication among the various flight dynamics software applications. AutoProducts autonomously retrieves necessary files, sequences and executes applications with correct input parameters, and delivers the final flight dynamics products to the appropriate customers. Although AutoProducts will normally generate pre-programmed sets of routine products, its graphical interface allows for easy configuration of customized and one-of-a-kind products. Additionally, AutoProducts has been designed as a mission-independent tool, and can be easily reconfigured to support other missions or incorporate new flight dynamics software packages. After the AM-1 launch, AutoProducts will run automatically at pre-determined time intervals. The AutoProducts tool reduces many of the concerns associated with flight dynamics product generation. Although AutoProducts required a significant effort to develop because of the complexity of the interfaces involved, its use will provide significant cost savings through reduced operator time and maximum product reliability. In addition, user satisfaction is significantly improved and flight dynamics experts have more time to perform valuable analysis work. This paper will describe the evolution of the AutoProducts tool, highlighting the cost savings and customer satisfaction resulting from its development. It will also provide details about the tool including its graphical interface and operational capabilities.
The NIST Internet time service
NASA Astrophysics Data System (ADS)
Levine, Judah
1994-05-01
We will describe the NIST Network Time Service which provides time and frequency information over the Internet. Our first time server is located in Boulder, Colorado, a second backup server is under construction there, and we plan to install a third server on the East Coast later this year. The servers are synchronized to UTC(NIST) with an uncertainty of about 0.8 ms RMS and they will respond to time requests from any client on the Internet in several different formats including the DAYTIME, TIME and NTP protocols. The DAYTIME and TIME protocols are the easiest to use and are suitable for providing time to PC's and other small computers. In addition to UTC(NIST), the DAYTIME message provides advance notice of leap seconds and of the transitions to and from Daylight Saving Time. The Daylight Saving Time notice is based on the US transition dates of the first Sunday in April and the last one in October. The NTP is a more complex protocol that is suitable for larger machines; it is normally run as a 'daemon' process in the background and can keep the time of the client to within a few milliseconds of UTC(NIST). We will describe the operating principles of various kinds of client software ranging from a simple program that queries the server once and sets the local clock to more complex 'daemon' processes (such as NTP) that continuously correct the time of the local clock based on periodic calibrations.
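For a sense of how simple the DAYTIME protocol (TCP port 13) is compared with NTP, the sketch below opens a socket, reads the single reply line, and prints it. time.nist.gov is the commonly used round-robin hostname; substitute another DAYTIME server if that one is unreachable from your network, and note that the example reply in the comment is only illustrative of the field layout (Modified Julian Date, UTC date and time, DST and leap-second indicators).

```python
import socket

# Query a DAYTIME server (TCP port 13): the server sends one human-readable
# line and closes the connection.  Hostname is an assumption; adjust as needed.
HOST, PORT = "time.nist.gov", 13

with socket.create_connection((HOST, PORT), timeout=5) as s:
    reply = s.recv(256).decode("ascii", errors="replace").strip()

print(reply)
# Illustrative field layout of the NIST reply (values invented):
#   "60452 24-05-22 13:45:10 50 0 0 731.5 UTC(NIST) *"
```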
The NIST Internet time service
NASA Technical Reports Server (NTRS)
Levine, Judah
1994-01-01
We will describe the NIST Network Time Service which provides time and frequency information over the Internet. Our first time server is located in Boulder, Colorado, a second backup server is under construction there, and we plan to install a third server on the East Coast later this year. The servers are synchronized to UTC(NIST) with an uncertainty of about 0.8 ms RMS and they will respond to time requests from any client on the Internet in several different formats including the DAYTIME, TIME and NTP protocols. The DAYTIME and TIME protocols are the easiest to use and are suitable for providing time to PC's and other small computers. In addition to UTC(NIST), the DAYTIME message provides advance notice of leap seconds and of the transitions to and from Daylight Saving Time. The Daylight Saving Time notice is based on the US transition dates of the first Sunday in April and the last one in October. The NTP is a more complex protocol that is suitable for larger machines; it is normally run as a 'daemon' process in the background and can keep the time of the client to within a few milliseconds of UTC(NIST). We will describe the operating principles of various kinds of client software ranging from a simple program that queries the server once and sets the local clock to more complex 'daemon' processes (such as NTP) that continuously correct the time of the local clock based on periodic calibrations.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
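A minimal sketch of the underlying idea follows: for each query state, nonnegative barycentric weights that sum to one are found by linear programming so that the L1 approximation error is explicit and minimized, and a free-running prediction is produced by applying those weights to the successors of the library states. This is an interpretation of the abstract written for illustration, not the authors' code; the delay embedding, toy data, and SciPy linprog formulation are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(V, q):
    """Nonnegative weights summing to 1 that best reconstruct q from the rows
    of V, minimizing the L1 approximation error via linear programming."""
    m, d = V.shape
    c = np.concatenate([np.zeros(m), np.ones(d)])        # minimize sum of errors
    A_ub = np.block([[ V.T, -np.eye(d)],                  # +(Vw - q) <= e
                     [-V.T, -np.eye(d)]])                 # -(Vw - q) <= e
    b_ub = np.concatenate([q, -q])
    A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.x[:m]

def free_run(states, n_steps):
    """Iterated one-step prediction: each new state is the weighted average of
    the successors of the library states, using barycentric weights."""
    library, successors = states[:-1], states[1:]
    x = states[-1].copy()
    out = []
    for _ in range(n_steps):
        w = barycentric_weights(library, x)
        x = successors.T @ w
        out.append(x)
    return np.array(out)

# toy example: noisy sine observed as 3-dimensional delay vectors
t = np.linspace(0, 20 * np.pi, 600)
s = np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
states = np.column_stack([s[:-2], s[1:-1], s[2:]])
pred = free_run(states, n_steps=50)
print(pred[:5, -1])     # first few predicted values of the leading coordinate
```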
Wang, Guanghui; Wu, Wells W; Zeng, Weihua; Chou, Chung-Lin; Shen, Rong-Fong
2006-05-01
A critical step in protein biomarker discovery is the ability to contrast proteomes, a process generally referred to as quantitative proteomics. While stable-isotope labeling (e.g., ICAT, 18O- or 15N-labeling, or AQUA) remains the core technology used in mass spectrometry-based proteomic quantification, increasing efforts have been directed to the label-free approach that relies on direct comparison of peptide peak areas between LC-MS runs. This latter approach is attractive to investigators for its simplicity as well as cost effectiveness. In the present study, the reproducibility and linearity of applying a label-free approach to highly complex proteomes were evaluated. Various amounts of proteins from different proteomes were subjected to repeated LC-MS analyses using an ion trap or Fourier transform mass spectrometer. Highly reproducible data were obtained between replicated runs, as evidenced by nearly ideal Pearson's correlation coefficients (for ion peak areas or retention time) and average peak area ratios. In general, more than 50% and nearly 90% of the peptide ion ratios deviated less than 10% and 20%, respectively, from the average in duplicate runs. In addition, the multiplicity ratios of the amounts of proteins used correlated nicely with the observed averaged ratios of peak areas calculated from detected peptides. Furthermore, the removal of abundant proteins from the samples led to an improvement in reproducibility and linearity. A computer program has been written to automate the processing of data sets from experiments with groups of multiple samples for statistical analysis. Algorithms for outlier-resistant mean estimation and for adjusting the statistical significance threshold under multiple testing were incorporated to minimize the rate of false positives. The program was applied to quantify changes in the proteomes of parental and p53-deficient HCT-116 human cells and found to yield reproducible results. Overall, this study demonstrates an alternative approach that allows global quantification of differentially expressed proteins in complex proteomes. The utility of this method for biomarker discovery is likely to synergize with future improvements in the detection sensitivity of mass spectrometers.
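The run-to-run statistics quoted above (Pearson correlation of matched peak areas, the mean ratio, and the fraction of peptides within 10% and 20% of that mean) are easy to reproduce once peptides have been matched across runs. The sketch below computes them on simulated duplicate injections; the data and metric names are invented for illustration and are not the study's processing program.

```python
import numpy as np
from scipy.stats import pearsonr

def run_to_run_agreement(area_run1, area_run2):
    """Reproducibility metrics for duplicate label-free LC-MS runs: Pearson r
    of matched peptide peak areas, the mean run2/run1 ratio, and the fraction
    of peptides deviating <10% and <20% from that mean ratio.  Peptide
    matching across runs is assumed to have been done already."""
    r, _ = pearsonr(area_run1, area_run2)
    ratios = np.asarray(area_run2, float) / np.asarray(area_run1, float)
    mean_ratio = ratios.mean()
    dev = np.abs(ratios / mean_ratio - 1.0)
    return {"pearson_r": r,
            "mean_ratio": mean_ratio,
            "within_10pct": float(np.mean(dev < 0.10)),
            "within_20pct": float(np.mean(dev < 0.20))}

# simulated duplicate injections of the same digest (multiplicative noise)
rng = np.random.default_rng(3)
true_area = rng.lognormal(mean=12, sigma=1.5, size=2000)
run1 = true_area * rng.lognormal(0, 0.05, 2000)
run2 = true_area * rng.lognormal(0, 0.05, 2000)
print(run_to_run_agreement(run1, run2))
```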
Bergstra, S A; Kluitenberg, B; Dekker, R; Bredeweg, S W; Postema, K; Van den Heuvel, E R; Hijmans, J M; Sobhani, S
2015-07-01
Minimalist running shoes have been proposed as an alternative to barefoot running. However, several studies have reported cases of forefoot stress fractures after switching from standard to minimalist shoes. Therefore, the aim of the current study was to investigate the differences in plantar pressure in the forefoot region between running with a minimalist shoe and running with a standard shoe in healthy female runners during overground running. Randomized crossover design. In-shoe plantar pressure measurements were recorded from eighteen healthy female runners. Peak pressure, maximum mean pressure, pressure time integral and instant of peak pressure were assessed for seven foot areas. Force time integral, stride time, stance time, swing time, shoe comfort and landing type were assessed for both shoe types. A linear mixed model was used to analyze the data. Peak pressure and maximum mean pressure were higher in the medial forefoot (13.5% and 7.46%, respectively), central forefoot (37.5% and 29.2%, respectively) and lateral forefoot (37.9% and 20.4%, respectively) for the minimalist shoe condition. Stance time was reduced by 3.81%. No relevant differences in shoe comfort or landing strategy were found. Running with a minimalist shoe increased plantar pressure without a change in landing pattern. This increased pressure in the forefoot region might play a role in the occurrence of metatarsal stress fractures in runners who switched to minimalist shoes and warrants a cautious approach to transitioning to minimalist shoe use. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Billat, V L; Bocquet, V; Slawinski, J; Laffite, L; Demarle, A; Chassaing, P; Koralsztein, J P
2000-09-01
The purpose of this study was to examine the influence of prior intermittent running at VO2max on oxygen kinetics during a continuous severe-intensity run and the time spent at VO2max. Eight long-distance runners performed three maximal tests on a synthetic track (400 m) whilst breathing through the COSMED K4 portable telemetric metabolic analyser: i) an incremental test which determined the velocity at the lactate threshold (vLT), VO2max and the velocity associated with VO2max (vVO2max); ii) a continuous severe-intensity run at vLT plus 50% of the difference between vLT and vVO2max (vΔ50; 91.3 ± 1.6% VO2max), preceded by a light continuous 20-minute run at 50% of vVO2max (light warm-up); iii) the same continuous severe-intensity run at vΔ50 preceded by an interval-training exercise (hard warm-up) of repeated hard running bouts at 100% of vVO2max and light running at 50% of vVO2max (30 seconds each) performed until exhaustion (on average 19 ± 5 min with 19 ± 5 interval repetitions). This hard warm-up speeded the VO2 kinetics: the time constant was reduced by 45% (28 ± 7 s vs 51 ± 37 s) and the slow component of VO2 (ΔVO2 6-3 min) was abolished (-143 ± 271 ml·min⁻¹ vs 291 ± 153 ml·min⁻¹). In conclusion, despite a significantly lower total run time at vΔ50 after the intermittent warm-up at VO2max (6 min 19 s ± 0 min 17 s vs 8 min 20 s ± 1 min 45 s, p = 0.02), the time spent specifically at VO2max in the severe continuous run at vΔ50 was not significantly different.
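The "time constant" and "slow component" above refer to the standard mono-exponential description of VO2 onset kinetics. The sketch below fits that model to synthetic breath-by-breath data with SciPy; the model form is the common convention in the kinetics literature, while the data and starting values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def vo2_onset(t, baseline, amplitude, delay, tau):
    """Mono-exponential VO2 onset model:
    VO2(t) = baseline + amplitude * (1 - exp(-(t - delay)/tau)) for t > delay.
    tau is the 'time constant'; a continued rise beyond this fit during heavy
    exercise is the 'slow component'."""
    t = np.asarray(t, float)
    rise = 1.0 - np.exp(-np.clip(t - delay, 0.0, None) / tau)
    return baseline + amplitude * rise

# synthetic breath-by-breath data with a true tau of 28 s
t = np.arange(0, 360, 5.0)
rng = np.random.default_rng(4)
vo2 = vo2_onset(t, 800, 3200, 15, 28) + 80 * rng.standard_normal(t.size)

popt, _ = curve_fit(vo2_onset, t, vo2, p0=(800, 3000, 10, 40))
print("fitted time constant: %.1f s" % popt[3])
```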
Benn, Neil; Turlais, Fabrice; Clark, Victoria; Jones, Mike; Clulow, Stephen
2007-03-01
The authors describe a system for collecting usage metrics from widely distributed automation systems. An application that records and stores usage data centrally, calculates run times, and charts the data was developed. Data were collected over 20 months from at least 28 workstations. The application was used to plot bar charts of date versus run time for individual workstations, the automation in a specific laboratory, or automation of a specified type. The authors show that revised user training, redeployment of equipment, and running complementary processes on one workstation can increase the average number of runs by up to 20-fold and run times by up to 450%. Active monitoring of usage leads to more effective use of automation. Usage data could be used to determine whether purchasing particular automation was a good investment.
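Computing run times and the date-versus-run-time summaries described above amounts to differencing start/end timestamps and aggregating per workstation and day. A minimal pandas sketch follows; the log layout and column names are hypothetical, not the paper's schema.

```python
import pandas as pd

# Hypothetical usage log: one row per automation run with start/end timestamps.
log = pd.DataFrame({
    "workstation": ["WS-01", "WS-01", "WS-02", "WS-02"],
    "start": pd.to_datetime(["2006-03-01 09:00", "2006-03-01 13:00",
                             "2006-03-02 08:30", "2006-03-02 15:00"]),
    "end":   pd.to_datetime(["2006-03-01 11:30", "2006-03-01 17:00",
                             "2006-03-02 12:00", "2006-03-02 16:15"]),
})
log["run_hours"] = (log["end"] - log["start"]).dt.total_seconds() / 3600

# Date-versus-run-time summary per workstation, the kind of data behind the
# bar charts described in the abstract.
summary = (log.assign(date=log["start"].dt.date)
              .groupby(["workstation", "date"])["run_hours"].sum())
print(summary)
# summary.unstack(0).plot.bar()   # uncomment to chart (requires matplotlib)
```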
Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr
2006-01-01
In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used: first, the continuous wavelet transforms of the uterine contraction signals were obtained, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
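The running (windowed) cross-correlation itself is straightforward to compute: slide a window along both signals, correlate over a range of lags, and record the lag of the peak as the local time delay. The sketch below does this for two synthetic signals; the window length, overlap, and lag convention are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def running_cross_correlation(x, y, window, max_lag):
    """Windowed cross-correlation of two equally sampled signals.  For each
    window position the normalized correlation is evaluated at lags
    -max_lag..+max_lag; the lag of the peak estimates the local delay
    (positive lag means y is delayed relative to x)."""
    n = len(x)
    lags = list(range(-max_lag, max_lag + 1))
    delays, peaks = [], []
    for start in range(0, n - window, window // 2):       # 50% overlap
        xs = x[start:start + window] - np.mean(x[start:start + window])
        ys = y[start:start + window] - np.mean(y[start:start + window])
        denom = np.std(xs) * np.std(ys) * window
        cc = [np.sum(np.roll(ys, -lag) * xs) / denom for lag in lags]
        best = int(np.argmax(cc))
        delays.append(lags[best])
        peaks.append(cc[best])
    return np.array(delays), np.array(peaks)

# two synthetic "contraction" signals, the second delayed by 15 samples
rng = np.random.default_rng(0)
t = np.arange(6000)
base = np.sin(2 * np.pi * t / 400) ** 2 + 0.05 * rng.standard_normal(t.size)
x = base
y = np.roll(base, 15) + 0.05 * rng.standard_normal(t.size)

delays, peaks = running_cross_correlation(x, y, window=800, max_lag=50)
print("median delay (samples):", np.median(delays))
```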
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Time takes space: selective effects of multitasking on concurrent spatial processing.
Mäntylä, Timo; Coni, Valentina; Kubik, Veit; Todorov, Ivo; Del Missier, Fabio
2017-08-01
Many everyday activities require coordination and monitoring of complex relations of future goals and deadlines. Cognitive offloading may provide an efficient strategy for reducing control demands by representing future goals and deadlines as a pattern of spatial relations. We tested the hypothesis that multiple-task monitoring involves time-to-space transformational processes, and that these spatial effects are selective with greater demands on coordinate (metric) than categorical (nonmetric) spatial relation processing. Participants completed a multitasking session in which they monitored four series of deadlines, running on different time scales, while making concurrent coordinate or categorical spatial judgments. We expected and found that multitasking taxes concurrent coordinate, but not categorical, spatial processing. Furthermore, males showed a better multitasking performance than females. These findings provide novel experimental evidence for the hypothesis that efficient multitasking involves metric relational processing.
Reliability of Vibrating Mesh Technology.
Gowda, Ashwin A; Cuccia, Ann D; Smaldone, Gerald C
2017-01-01
For delivery of inhaled aerosols, vibrating mesh systems are more efficient than jet nebulizers and do not require added gas flow. We assessed the reliability of a vibrating mesh nebulizer (Aerogen Solo, Aerogen Ltd, Galway, Ireland) suitable for use in mechanical ventilation. An initial observational study was performed with 6 nebulizers to determine run time and efficiency using normal saline and distilled water. Nebulizers were run until cessation of aerosol production was noted, with residual volume and run time recorded. Three controllers were used to assess the impact of the controller on nebulizer function. Following the observational study, a more detailed experimental protocol was performed using 20 nebulizers. For this analysis, 2 controllers were used, and time to cessation of aerosol production was noted. Gravimetric techniques were used to measure residual volume. Total nebulization time and residual volume were recorded. Failure was defined as premature cessation of aerosol production represented by a residual volume of > 10% of the nebulizer charge. In the initial observational protocol, an unexpected sporadic failure rate of 25% was noted in 55 experimental runs. In the experimental protocol, a failure rate of 30% was noted in 40 experimental runs. Failed runs in the experimental protocol exhibited a wide range of retained volume, averaging (mean ± SD) 36 ± 21.3%, compared with 3.2 ± 1.5% (P = .001) in successful runs. Small but significant differences existed in nebulization time between controllers. Aerogen Solo nebulization was often randomly interrupted, with a wide range of retained volumes. Copyright © 2017 by Daedalus Enterprises.
Fixed-interval matching-to-sample: intermatching time and intermatching error runs
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
NASA Astrophysics Data System (ADS)
Avolio, G.; Corso Radu, A.; Kazarov, A.; Lehmann Miotto, G.; Magnoni, L.
2012-12-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment is a very complex distributed computing system, composed of more than 20000 applications running on more than 2000 computers. The TDAQ Controls system has to guarantee the smooth and synchronous operations of all the TDAQ components and has to provide the means to minimize the downtime of the system caused by runtime failures. During data taking runs, streams of information messages sent or published by running applications are the main sources of knowledge about correctness of running operations. The huge flow of operational monitoring data produced is constantly monitored by experts in order to detect problems or misbehaviours. Given the scale of the system and the rates of data to be analyzed, the automation of the system functionality in the areas of operational monitoring, system verification, error detection and recovery is a strong requirement. To accomplish its objective, the Controls system includes some high-level components which are based on advanced software technologies, namely the rule-based Expert System and the Complex Event Processing engines. The chosen techniques allow to formalize, store and reuse the knowledge of experts and thus to assist the shifters in the ATLAS control room during the data-taking activities.
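To illustrate the kind of knowledge such engines encode, the sketch below implements one toy complex-event-processing rule in plain Python: alert when a single application emits more than a threshold number of ERROR messages within a sliding time window. The rule, names, and thresholds are invented for illustration; the ATLAS Controls system expresses this logic in dedicated rule-based expert-system and CEP frameworks rather than ad hoc scripts.

```python
from collections import defaultdict, deque

class ErrorBurstRule:
    """Toy CEP-style rule: raise an alert when one application publishes more
    than `threshold` ERROR messages within a sliding window of `window_s`
    seconds.  All names and numbers here are hypothetical."""
    def __init__(self, window_s=60.0, threshold=5):
        self.window_s, self.threshold = window_s, threshold
        self.errors = defaultdict(deque)            # app -> timestamps of ERRORs

    def on_message(self, timestamp, app, severity):
        if severity != "ERROR":
            return None
        q = self.errors[app]
        q.append(timestamp)
        while q and timestamp - q[0] > self.window_s:   # drop events outside window
            q.popleft()
        if len(q) > self.threshold:
            return f"ALERT: {app} produced {len(q)} errors in {self.window_s:.0f}s"
        return None

rule = ErrorBurstRule(window_s=60, threshold=5)
stream = [(t, "HLT_node_042", "ERROR") for t in range(0, 70, 10)]   # synthetic burst
for ts, app, sev in stream:
    alert = rule.on_message(ts, app, sev)
    if alert:
        print(alert)
```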
Vernillo, Gianluca; Savoldelli, Aldo; Zignoli, Andrea; Trabucchi, Pietro; Pellegrini, Barbara; Millet, Grégoire P; Schena, Federico
2014-05-01
To examine the effects of the world's most challenging mountain ultra-marathon (Tor des Géants(®) 2012) on the energy cost of three types of locomotion (cycling, level and uphill running) and running kinematics. Before (pre-) and immediately after (post-) the competition, a group of ten male experienced ultra-marathon runners performed in random order three submaximal 4-min exercise trials: cycling at a power of 1.5 W kg(-1) body mass; level running at 9 km h(-1) and uphill running at 6 km h(-1) at an inclination of +15 % on a motorized treadmill. Two video cameras recorded running mechanics at different sampling rates. Between pre- and post-, the uphill-running energy cost decreased by 13.8 % (P = 0.004); no change was noted in the energy cost of level running or cycling (NS). There was an increase in contact time (+10.3 %, P = 0.019) and duty factor (+8.1 %, P = 0.001) and a decrease in swing time (-6.4 %, P = 0.008) in the uphill-running condition. After this extreme mountain ultra-marathon, the subjects modified only their uphill-running patterns for a more economical step mechanics.
A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.
Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco
2018-01-01
Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
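The elementary kernel that an area-source model evaluates many times per receptor is the point-source Gaussian plume formula. The sketch below implements the textbook ground-reflecting version; it is far simpler than CAREA's AERMOD-derived formulation and the numerical inputs are arbitrary, so treat it only as an illustration of the computation that gets repeated over many receptor-subsource pairs.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Textbook ground-reflecting Gaussian plume concentration (g/m^3) for a
    point source: emission rate q (g/s), wind speed u (m/s), crosswind offset
    y (m), receptor height z (m), effective stack height h (m), and dispersion
    parameters sigma_y, sigma_z (m) evaluated at the receptor's downwind
    distance.  Values below are arbitrary illustration, not CAREA output."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))   # image (reflection) term
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# near-ground receptor 50 m off-axis of a 10 m stack, roughly neutral dispersion
print(gaussian_plume(q=1.0, u=3.0, y=50.0, z=1.5, h=10.0, sigma_y=36.0, sigma_z=18.0))
```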
Sun, Xiaojun; Lin, Lei; Liu, Xinyue; Zhang, Fuming; Chi, Lianli; Xia, Qiangwei; Linhardt, Robert J
2016-02-02
Heparins, highly sulfated, linear polysaccharides also known as glycosaminoglycans, are among the most challenging biopolymers to analyze. Hyphenated techniques in conjunction with mass spectrometry (MS) offer rapid analysis of complex glycosaminoglycan mixtures, providing detailed structural and quantitative data. Previous analytical approaches have often relied on liquid chromatography (LC)-MS, and some have limitations including long separation times, low resolution of oligosaccharide mixtures, incompatibility of eluents, and often require oligosaccharide derivatization. This study examines the analysis of glycosaminoglycan oligosaccharides using a novel electrokinetic pump-based capillary electrophoresis (CE)-MS interface. CE separation and electrospray were optimized using a volatile ammonium bicarbonate electrolyte and a methanol-formic acid sheath fluid. The online analyses of highly sulfated heparin oligosaccharides, ranging from disaccharides to low molecular weight heparins, were performed within a 10 min time frame, offering an opportunity for higher-throughput analysis. Disaccharide compositional analysis as well as top-down analysis of low molecular weight heparin was demonstrated. Using normal polarity CE separation and positive-ion electrospray ionization MS, excellent run-to-run reproducibility (relative standard deviation of 3.6-5.1% for peak area and 0.2-0.4% for peak migration time) and sensitivity (limit of quantification of 2.0-5.9 ng/mL and limit of detection of 0.6-1.8 ng/mL) could be achieved.
Monte-Carlo methods make Dempster-Shafer formalism feasible
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Bernat, Andrew; Borrett, Walter; Mariscal, Yvonne; Villa, Elsa
1991-01-01
One of the main obstacles to the applications of Dempster-Shafer formalism is its computational complexity. If we combine m different pieces of knowledge, then in the general case we have to perform up to 2^m computational steps, which for large m is infeasible. For several important cases, algorithms with smaller running time were proposed. We prove, however, that if we want to compute the belief bel(Q) in any given query Q, then exponential time is inevitable. It is still inevitable if we want to compute bel(Q) with given precision epsilon. This restriction corresponds to the natural idea that since initial masses are known only approximately, there is no sense in trying to compute bel(Q) precisely. A further idea is that there is always some doubt in the whole knowledge, so there is always a probability p_0 that the expert's knowledge is wrong. In view of that, it is sufficient to have an algorithm that gives a correct answer with probability greater than 1 - p_0. If we use the original Dempster's combination rule, this possibility diminishes the running time, but still leaves the problem infeasible in the general case. We show that for the alternative combination rules proposed by Smets and Yager, feasible methods exist. We also show how these methods can be parallelized, and what parallelization model fits this problem best.
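The Monte-Carlo idea sketched in the abstract can be made concrete for the original Dempster rule as follows: sample one focal set from each of the m sources, intersect them, discard conflicting (empty) trials, and count how often the surviving intersection supports the query. The code below is an illustrative implementation of that scheme with a made-up two-sensor example; it is not the authors' algorithm for the Smets or Yager rules, and, as the abstract notes, when conflict is high most trials are rejected, which is why this scheme alone does not make the general case feasible.

```python
import random

def mc_belief(mass_functions, query, n_trials=50_000, seed=0):
    """Monte-Carlo estimate of the combined belief bel(query) under Dempster's
    rule.  Each mass function is a dict {frozenset: mass}.  A trial samples one
    focal set per source and intersects them; empty intersections (conflict)
    are discarded, reproducing Dempster's normalization.  Accuracy scales as
    1/sqrt(N), independent of the number of sources m."""
    rng = random.Random(seed)
    sources = [(list(m.keys()), list(m.values())) for m in mass_functions]
    accepted = supporting = 0
    for _ in range(n_trials):
        inter = None
        for sets, weights in sources:
            focal = rng.choices(sets, weights=weights)[0]
            inter = focal if inter is None else inter & focal
            if not inter:
                break                       # conflicting trial, reject
        if inter:
            accepted += 1
            if inter <= query:              # intersection supports the query
                supporting += 1
    return supporting / accepted if accepted else float("nan")

# two hypothetical noisy sensors over the frame {a, b, c}
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.3, frozenset("abc"): 0.1}
m2 = {frozenset("a"): 0.5, frozenset("bc"): 0.3, frozenset("abc"): 0.2}
print(mc_belief([m1, m2], query=frozenset("a")))   # ~0.76 for this toy case
```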
Noack, Marko; Partzsch, Johannes; Mayr, Christian G.; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene
2015-01-01
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm2 and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling. PMID:25698914
Agreement between VO2peak Predicted from PACER and One-Mile Run Time-Equated Laps
ERIC Educational Resources Information Center
Saint-Maurice, Pedro F.; Anderson, Katelin; Bai, Yang; Welk, Gregory J.
2016-01-01
Purpose: This study examined the agreement between estimated peak oxygen consumption (VO2peak) obtained from the Progressive Aerobic Cardiovascular Endurance Run (PACER) fitness test and equated PACER laps derived from One-Mile Run time (MR). Methods: A sample of 680 participants (324 boys and 356 girls) in Grades 7 through 12…
The Reliability of a 5km Run Test on a Motorized Treadmill
ERIC Educational Resources Information Center
Driller, Matthew; Brophy-Williams, Ned; Walker, Anthony
2017-01-01
The purpose of the present study was to determine the reliability of a 5km run test on a motorized treadmill. Over three consecutive weeks, 12 well-trained runners completed three 5km time trials on a treadmill following a standardized warm-up. Runners were partially-blinded to their running speed and distance covered. Total time to complete the…
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
Excerpt from the Model Rule emission limitations table: Hydrogen chloride, 62 parts per million by dry volume, 3-run average (1 hour minimum sample time per run); Sulfur dioxide, 20 parts per million by dry volume, 3-run average (1 hour minimum sample time per run); ... or ASTM D6784-02 (Reapproved 2008); Opacity, 10 percent, three 1-hour blocks consisting of ten 6-minute ...
Critical Velocity Is Associated With Combat-Specific Performance Measures in a Special Forces Unit.
Hoffman, Mattan W; Stout, Jeffrey R; Hoffman, Jay R; Landua, Geva; Fukuda, David H; Sharvit, Nurit; Moran, Daniel S; Carmon, Erez; Ostfeld, Ishay
2016-02-01
The purpose of this study was to examine the relationship of critical velocity (CV) and anaerobic distance capacity (ADC) to combat-specific tasks (CST) in a special forces (SFs) unit. Eighteen male soldiers (mean ± SD; age: 19.9 ± 0.8 years; height: 177.6 ± 6.6 cm; body mass: 74.1 ± 5.8 kg; body mass index [BMI]: 23.52 ± 1.63) from an SF unit of the Israel Defense Forces volunteered to complete a 3-minute all-out run along with CST (2.5-km run, 50-m casualty carry, and 30-m repeated sprints with "rush" shooting [RPTDS]). Estimates of CV and ADC from the 3-minute all-out run were determined from data downloaded from a global positioning system device worn by each soldier, with CV calculated as the average velocity of the final 30 seconds of the run and ADC as the velocity-time integral above CV. Critical velocity exhibited significant negative correlations with the 2.5-km run time (r = -0.62, p < 0.01) and RPTDS time (r = -0.71, p < 0.01). In addition, CV was positively correlated with the average velocity during the 2.5-km run (r = 0.64, p < 0.01). Stepwise regression identified CV as the most significant performance measure associated with the 2.5-km run time, whereas BMI and CV measures were significant predictors of RPTDS time (R(2) = 0.67, p ≤ 0.05). Using the 3-minute all-out run as a testing measurement in combat personnel may offer a more efficient and simpler way of assessing both aerobic and anaerobic capabilities (CV and ADC) within a relatively large sample.
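The abstract defines CV as the mean velocity over the final 30 s of the 3-minute all-out run and ADC as the velocity-time integral above CV. A minimal sketch of that calculation, assuming a uniformly sampled velocity trace from the GPS device, could look like this (function and argument names are illustrative):

    def critical_velocity(velocity, dt=1.0):
        """Estimate CV and ADC from a 3-minute all-out run as described in the
        abstract: `velocity` is a list of speeds (m/s) sampled every `dt`
        seconds over the test; CV is the mean speed of the final 30 s, and ADC
        is the velocity-time integral above CV."""
        n_last = int(30.0 / dt)
        cv = sum(velocity[-n_last:]) / n_last
        adc = sum(max(v - cv, 0.0) * dt for v in velocity)
        return cv, adc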
Effect of metrology time delay on overlay APC
NASA Astrophysics Data System (ADS)
Carlson, Alan; DiBiase, Debra
2002-07-01
The run-to-run control strategy of lithography APC is primarily composed of a feedback loop. It is known that the insertion of a time delay in a feedback loop can cause degradation in control performance and could even cause a stable system to become unstable if the time delay becomes sufficiently large. Many proponents of integrated metrology methods have cited the damage caused by metrology time delays as the primary justification for moving from stand-alone to integrated metrology. While there is little dispute over the qualitative form of this argument, very little has been published about the quantitative effects under real fab conditions - precisely how much control is lost due to these time delays. Another issue regarding time delays is that the length of these delays is not typically fixed - they vary from lot to lot and in some cases this variance can be large - from one hour on the short side to over 32 hours on the long side. Concern has been expressed that the variability in metrology time delays can cause undesirable dynamics in feedback loops that make it difficult to optimize feedback filters and gains and at worst could drive a system unstable. By using data from numerous fabs, spanning many sizes and styles of operation, we have conducted a quantitative study of the time delay effect on overlay run-to-run control. Our analysis resulted in the following conclusions: (1) There is a significant and material relationship between metrology time delay and overlay control under a variety of real-world production conditions. (2) The run-to-run controller can be configured to minimize sensitivity to time delay variations. (3) The value of moving to integrated metrology can be quantified.
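To make the feedback-loop argument concrete, the sketch below simulates a single-term overlay correction driven by an EWMA run-to-run controller with metrology results arriving several lots late. The EWMA filter is a common choice for this kind of controller but is an assumption here, not necessarily the scheme used in the fabs studied; the point is only to illustrate how the delay and the filter gain interact.

    def simulate_r2r(disturbance, lam=0.3, delay=3):
        """Single-term overlay run-to-run loop with an EWMA disturbance
        estimator and metrology results arriving `delay` lots late.
        Overlay error of lot k:   y_k = disturbance[k] - u_k
        Disturbance estimate:     est <- (1 - lam) * est + lam * (y_j + u_j),
                                  using the newest measured lot j = k - delay
        Correction for next lot:  u_{k+1} = est"""
        u, est, errors, history = 0.0, 0.0, [], []
        for k, d in enumerate(disturbance):
            y = d - u                    # overlay error actually printed on lot k
            errors.append(y)
            history.append((u, y))
            if k >= delay:               # metrology for lot k - delay is now back
                u_j, y_j = history[k - delay]
                est = (1.0 - lam) * est + lam * (y_j + u_j)
            u = est                      # feedback correction applied to the next lot
        return errors

Sweeping `delay` and `lam` over historical disturbance traces is one simple way to quantify how much control is lost to metrology delay and how the filter gain should be de-tuned as the delay grows.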
Reachability Analysis in Probabilistic Biological Networks.
Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2015-01-01
Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
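PReach computes the source-to-target reachability probability exactly by polynomial collapsing; the naive baseline it improves on can be illustrated with a simple Monte-Carlo estimate over sampled network realizations, sketched below. This is explicitly not PReach's algorithm, and interactions are treated as undirected for simplicity; all names are illustrative.

    import random

    def mc_reachability(edges, sources, targets, n_samples=10000):
        """Naive Monte-Carlo estimate of the probability that some target is
        reachable from some source. `edges` maps (u, v) -> presence probability;
        each sample draws one network realization and runs a graph search."""
        hits = 0
        for _ in range(n_samples):
            adj = {}
            for (u, v), p in edges.items():        # sample one realization
                if random.random() < p:
                    adj.setdefault(u, []).append(v)
                    adj.setdefault(v, []).append(u)
            stack, seen = list(sources), set(sources)
            while stack:                            # depth-first search from all receptors
                node = stack.pop()
                for nxt in adj.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            if seen & set(targets):
                hits += 1
        return hits / n_samples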
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm can reduce the solving time by 3.5 times and reduce the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear with the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can reach 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized in 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
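The key numerical ingredient named in the abstract is the COCG iteration for complex symmetric systems. A minimal dense-matrix NumPy sketch is given below; the published solver is a multi-GPU implementation over a sparse admittance network, so this is only meant to show where the unconjugated inner product replaces the Hermitian one used by ordinary CG.

    import numpy as np

    def cocg(A, b, x0=None, tol=1e-8, max_iter=1000):
        """Conjugate orthogonal conjugate gradient for a complex *symmetric*
        (not Hermitian) system A x = b: ordinary CG with the unconjugated
        bilinear form r.T @ r in place of the Hermitian inner product. A is
        used only through matrix-vector products."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        p = r.copy()
        rho = r @ r                       # unconjugated inner product
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rho / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            rho_new = r @ r
            p = r + (rho_new / rho) * p
            rho = rho_new
        return x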
Figueiredo, Pedro; Marques, Elisa A; Lepers, Romuald
2016-09-01
Figueiredo, P, Marques, EA, and Lepers, R. Changes in contributions of swimming, cycling, and running performances on overall triathlon performance over a 26-year period. J Strength Cond Res 30(9): 2406-2415, 2016-This study examined the changes in the individual contribution of each discipline to the overall performance of Olympic and Ironman distance triathlons among men and women. Between 1989 and 2014, overall performances and their component disciplines (swimming, cycling and running) were analyzed from the top 50 overall male and female finishers. Regression analyses determined that for the Olympic distance, the split times in swimming and running decreased over the years (r = 0.25-0.43, p ≤ 0.05), whereas the cycling split and total time remained unchanged (p > 0.05), for both sexes. For the Ironman distance, the cycling and running splits and the total time decreased (r = 0.19-0.88, p ≤ 0.05), whereas swimming time remained stable, for both men and women. The average contribution of the swimming stage (∼18%) was smaller than the cycling and running stages (p ≤ 0.05), for both distances and both sexes. Running (∼47%) and then cycling (∼36%) had the greatest contribution to overall performance for the Olympic distance (∼47%), whereas for the Ironman distance, cycling and running presented similar contributions (∼40%, p > 0.05). Across the years, in the Olympic distance, swimming contribution significantly decreased for women and men (r = 0.51 and 0.68, p < 0.001, respectively), whereas running increased for men (r = 0.33, p = 0.014). In the Ironman distance, swimming and cycling contributions changed in an undulating fashion, being inverse between the two segments, for both sexes (p < 0.01), whereas running contribution decreased for men only (r = 0.61, p = 0.001). These findings highlight that strategies to improve running performance should be the main focus on the preparation to compete in the Olympic distance; whereas, in the Ironman, both cycling and running are decisive and should be well developed.
Ceballos-Villegas, Maria E.; Saldaña Mena, Juan J.; Gutierrez Lozano, Ana L.; Sepúlveda-Cañamar, Francisco J.; Huidobro, Nayeli; Manjarrez, Elias; Lomeli, Joel
2017-01-01
The Hoffmann reflex (H-wave) is produced by alpha-motoneuron activation in the spinal cord. A feature of this electromyography response is that it exhibits fluctuations in amplitude even during repetitive stimulation with the same intensity of current. We herein explore the hypothesis that physical training induces plastic changes in the motor system. Such changes are evaluated with the fractal dimension (FD) analysis of the H-wave amplitude-fluctuations (H-wave FD) and the cross-covariance (CCV) between the bilateral H-wave amplitudes. The aim of this study was to compare the H-wave FD as well as the CCV before and after track training in sedentary individuals and athletes. The training modality in all subjects consisted of running three times per week (for 13 weeks) in a concrete road of 5 km. Given the different physical condition of sedentary vs. athletes, the running time between sedentary and athletes was different. After training, the FD was significantly increased in sedentary individuals but significantly reduced in athletes, although there were no changes in spinal excitability in either group of subjects. Moreover, the CCV between bilateral H-waves exhibited a significant increase in athletes but not in sedentary individuals. These differential changes in the FD and CCV indicate that the plastic changes in the complexity of the H-wave amplitude fluctuations as well as the synaptic inputs to the Ia-motoneuron systems of both legs were correlated to the previous fitness history of the subjects. Furthermore, these findings demonstrate that the FD and CCV can be employed as indexes to study plastic changes in the human motor system. PMID:29163107
Lightweight fuzzy processes in clinical computing.
Hurdle, J F
1997-09-01
In spite of advances in computing hardware, many hospitals still have a hard time finding extra capacity in their production clinical information system to run artificial intelligence (AI) modules, for example: to support real-time drug-drug or drug-lab interactions; to track infection trends; to monitor compliance with case-specific clinical guidelines; or to monitor/control biomedical devices like an intelligent ventilator. Historically, adding AI functionality was not a major design concern when a typical clinical system was originally specified. AI technology is usually retrofitted 'on top of the old system' or 'run off line' in tandem with the old system to ensure that the routine workload still gets done (with as little impact from the AI side as possible). To compound the burden on system performance, most institutions have witnessed a long and increasing trend for intramural and extramural reporting (e.g. the collection of data for a quality-control report in microbiology, or a meta-analysis of a suite of coronary artery bypass graft techniques, etc.), and these place an ever-growing burden on the typical computer system's performance. We discuss a promising approach to adding extra AI processing power to a heavily used system based on the notion of 'lightweight fuzzy processing' (LFP), that is, fuzzy modules designed from the outset to impose a small computational load. A formal model for a useful subclass of fuzzy systems is defined below and is used as a framework for the automated generation of LFPs. By seeking to reduce the arithmetic complexity of the model (a hand-crafted process) and the data complexity of the model (an automated process), we show how LFPs can be generated for three sample datasets of clinical relevance.
Sleep Consolidates Motor Learning of Complex Movement Sequences in Mice.
Nagai, Hirotaka; de Vivo, Luisa; Bellesi, Michele; Ghilardi, Maria Felice; Tononi, Giulio; Cirelli, Chiara
2017-02-01
Sleep-dependent consolidation of motor learning has been extensively studied in humans, but it remains unclear why some, but not all, learned skills benefit from sleep. Here, we compared 2 different motor tasks, both requiring the mice to run on an accelerating device. In the rotarod task, mice learn to maintain balance while running on a small rod, while in the complex wheel task, mice run on an accelerating wheel with an irregular rung pattern. In the rotarod task, performance improved to the same extent after sleep or after sleep deprivation (SD). Overall, using 7 different experimental protocols (41 sleep deprived mice, 26 sleeping controls), we found large interindividual differences in the learning and consolidation of the rotarod task, but sleep before/after training did not account for this variability. By contrast, using the complex wheel, we found that sleep after training, relative to SD, led to better performance from the beginning of the retest session, and longer sleep was correlated with greater subsequent performance. As in humans, the effects of sleep showed large interindividual variability and varied between fast and slow learners, with sleep favoring the preservation of learned skills in fast learners and leading to a net offline gain in the performance in slow learners. Using Fos expression as a proxy for neuronal activation, we also found that complex wheel training engaged motor cortex and hippocampus more than the rotarod training. Sleep specifically consolidates a motor skill that requires complex movement sequences and strongly engages both motor cortex and hippocampus. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Mann, Robert; Malisoux, Laurent; Brunner, Roman; Gette, Paul; Urhausen, Axel; Statham, Andrew; Meijer, Kenneth; Theisen, Daniel
2014-01-01
Running biomechanics has received increasing interest in recent literature on running-related injuries, calling for new, portable methods for large-scale measurements. Our aims were to define running strike pattern based on output of a new pressure-sensitive measurement device, the Runalyser, and to test its validity regarding temporal parameters describing running gait. Furthermore, reliability of the Runalyser measurements was evaluated, as well as its ability to discriminate different running styles. Thirty-one healthy participants (30.3 ± 7.4 years, 1.78 ± 0.10 m and 74.1 ± 12.1 kg) were involved in the different study parts. Eleven participants were instructed to use a rearfoot (RFS), midfoot (MFS) and forefoot (FFS) strike pattern while running on a treadmill. Strike pattern was subsequently defined using a linear regression (R(2)=0.89) between foot strike angle, as determined by motion analysis (1000 Hz), and strike index (SI, point of contact on the foot sole, as a percentage of foot sole length), as measured by the Runalyser. MFS was defined by the 95% confidence interval of the intercept (SI=43.9-49.1%). High agreement (overall mean difference 1.2%) was found between stance time, flight time, stride time and duty factor as determined by the Runalyser and a force-measuring treadmill (n=16 participants). Measurements of the two devices were highly correlated (R ≥ 0.80) and not significantly different. Test-retest intra-class correlation coefficients for all parameters were ≥ 0.94 (n=14 participants). Significant differences (p<0.05) between FFS, RFS and habitual running were detected regarding SI, stance time and stride time (n=24 participants). The Runalyser is suitable for, and easily applicable in large-scale studies on running biomechanics. Copyright © 2013 Elsevier B.V. All rights reserved.
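A small sketch of the strike-pattern classification implied by the abstract: the midfoot band is the reported 95% confidence interval of the regression intercept (SI between 43.9% and 49.1% of foot-sole length), and values below or above that band are treated as rearfoot and forefoot strikes, respectively (the handling of the exact boundaries is an assumption).

    def classify_strike(strike_index):
        """Strike-pattern label from the strike index SI (% of foot-sole length),
        using the midfoot band reported in the abstract (43.9-49.1%)."""
        if strike_index < 43.9:
            return "RFS"    # rearfoot strike
        if strike_index <= 49.1:
            return "MFS"    # midfoot strike
        return "FFS"        # forefoot strike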
Transitionless driving on adiabatic search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907
We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for short running times to inverse-square decay for asymptotically long running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from inverse-square decay to inverse-fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
The Validity and Reliability of an iPhone App for Measuring Running Mechanics.
Balsalobre-Fernández, Carlos; Agopyan, Hovannes; Morin, Jean-Benoit
2017-07-01
The purpose of this investigation was to analyze the validity of an iPhone application (Runmatic) for measuring running mechanics. To do this, 96 steps from 12 different runs at speeds ranging from 2.77 to 5.55 m·s⁻¹ were recorded simultaneously with Runmatic, as well as with an opto-electronic device installed on a motorized treadmill to measure the contact and aerial time of each step. Additionally, several running mechanics variables were calculated from the measured contact and aerial times using previously validated equations. Several statistics were computed to test the validity and reliability of Runmatic in comparison with the opto-electronic device for the measurement of contact time, aerial time, vertical oscillation, leg stiffness, maximum relative force, and step frequency. The running mechanics values obtained with both the app and the opto-electronic device showed a high degree of correlation (r = .94-.99, p < .001). Moreover, there was very close agreement between instruments as revealed by the ICC (2,1) (ICC = 0.965-0.991). Finally, both Runmatic and the opto-electronic device showed almost identical reliability levels when measuring each set of 8 steps for every run recorded. In conclusion, Runmatic has been proven to be a highly reliable tool for measuring the running mechanics studied in this work.
Gómez-Molina, Josué; Ogueta-Alday, Ana; Camara, Jesus; Stickley, Christopher; García-López, Juan
2018-03-01
Concurrent plyometric and running training has the potential to improve running economy (RE) and performance through increasing muscle strength and power, but the possible effect on spatiotemporal parameters of running has not been studied yet. The aim of this study was to compare the effect of 8 weeks of concurrent plyometric and running training on spatiotemporal parameters and physiological variables of novice runners. Twenty-five male participants were randomly assigned to two training groups: a running group (RG) (n = 11) and a running + plyometric group (RPG) (n = 14). Both groups performed an 8-week running training programme, and only the RPG performed a concurrent plyometric training programme (two sessions per week). Anthropometric, physiological (VO2max, heart rate and RE) and spatiotemporal variables (contact and flight times, step rate and length) were registered before and after the intervention. In comparison to the RG, the RPG reduced step rate and increased flight times at the same running speeds (P < .05) while contact times remained constant. Significant increases from pre- to post-training (P < .05) were found in the RPG for the squat jump and 5-bound test, while the RG remained unchanged. Peak speed, ventilatory threshold (VT) speed and respiratory compensation threshold (RCT) speed increased (P < .05) for both groups, although peak speed and VO2max increased more in the RPG than in the RG. In conclusion, concurrent plyometric and running training entails a reduction in step rate, as well as increases in VT speed, RCT speed, peak speed and VO2max. Athletes could benefit from plyometric training in order to improve their strength, which would contribute to them attaining higher running speeds.
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurements for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance is presented and specific measurement methods are discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
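A minimal sketch of the two useful tests described in the abstract: count the longest run of consecutive points on one side of the median and the number of median crossings; an unusually long run or an unusually small number of crossings (relative to the limits tabulated by the authors, which are not reproduced here) signals non-random variation.

    def run_chart_signals(data):
        """Return (longest run on one side of the median, number of median
        crossings) for a sequence of measurements; points on the median are
        dropped, as is usual for these tests."""
        s = sorted(data)
        n = len(s)
        median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0
        sides = [1 if x > median else -1 for x in data if x != median]
        if not sides:
            return 0, 0
        longest = run = 1
        crossings = 0
        for prev, cur in zip(sides, sides[1:]):
            if cur == prev:
                run += 1
                longest = max(longest, run)
            else:
                run = 1
                crossings += 1
        return longest, crossings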
The Impact of a Food Elimination Diet on Collegiate Athletes' 300-meter Run Time and Concentration
Breshears, Karen; Baker, David McA.
2014-01-01
Background: Optimal human function and performance through diet strategies are critical for everyone but especially for those involved in collegiate or professional athletics. Currently, individualized medicine (IM) is emerging as a more efficacious approach to health with emphasis on personalized diet strategies for the public and is common practice for elite athletes. One method for directing patient-specific foods in the diet, while concomitantly impacting physical performance, may be via IgG food sensitivity and Candida albicans analysis from dried blood spot (DBS) collections. Methods: The authors designed a quasi-experimental, nonrandomized, pilot study without a control group. Twenty-three participants, 15 female, 8 male, from soccer/volleyball and football athletic teams, respectively, mean age 19.64 ± 0.86 years, were recruited for the study, which examined pre- and posttest 300-meter run times and questionnaire responses after a 14-day IgG DBS–directed food elimination diet based on IgG reactivity to 93 foods. DBS specimen collection, 300-meter run times, and Learning Difficulties Assessment (LDA) questionnaires were collected at the participants' university athletics building on campus. IgG, C albicans, and S cerevisiae analyses were conducted at the Great Plains Laboratory, Lenexa, Kansas. Results: Data indicated a change in 300-meter run time but not of statistical significance (run time baseline mean=50.41 sec, run time intervention mean=50.14 sec). Descriptive statistics for frequency of responses and chi-square analysis revealed that 4 of the 23 items selected from the LDA (Listening-Memory and Concentration subscale R=.8669; Listening-Information Processing subscale R=.8517; and General Concentration and Memory subscale R=.9019) were improved posttest. Conclusion: The study results did not indicate merit in eliminating foods based on IgG reactivity for affecting athletic performance (faster 300-meter run time) but did reveal potential for affecting academic qualities of listening, information processing, concentration, and memory. Further studies are warranted evaluating IgG-directed food elimination diets for improving run time, concentration, and memory among college athletes as well as among other populations. PMID:25568830
Dalgin, Rebecca Spirito; Dalgin, M Halim; Metzger, Scott J
2018-05-01
This article focuses on the impact of a peer run warm line as part of the psychiatric recovery process. It utilized data including the Recovery Assessment Scale, community integration measures and crisis service usage. Longitudinal statistical analysis was completed on 48 sets of data from 2011, 2012, and 2013. Although no statistically significant differences were observed for the RAS score, community integration data showed increases in visits to primary care doctors, leisure/recreation activities and socialization with others. This study highlights the complexity of psychiatric recovery and that nonclinical peer services like peer run warm lines may be critical to the process.
Changes in Running Mechanics During a 6-Hour Running Race.
Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano
2017-05-01
To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < .05), while tc increased after 4 h 30 min of running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < .05). Finally, SL decreased significantly (-5.1%, P = .010) during the last hour of the race. Most changes occurred after 4 h continuous self-paced running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
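The quantities listed in the abstract (Fmax, Δz, ΔL, kvert, kleg) are commonly estimated from contact time, aerial time and speed with the sine-wave spring-mass approximation of Morin et al.; the sketch below uses that approximation and is only illustrative, since the paper may have used a different estimation procedure, and the 0.53 × stature resting leg-length estimate is an assumption.

    import math

    def spring_mass(mass, stature, speed, t_c, t_a, g=9.81):
        """Spring-mass estimates from contact time t_c (s), aerial time t_a (s),
        running speed (m/s), body mass (kg) and stature (m), using the sine-wave
        approximation of Morin et al. (2005)."""
        F_max = mass * g * (math.pi / 2.0) * (t_a / t_c + 1.0)                # peak vertical GRF
        dz = F_max * t_c ** 2 / (mass * math.pi ** 2) - g * t_c ** 2 / 8.0    # vertical COM drop
        k_vert = F_max / dz                                                   # vertical stiffness
        L0 = 0.53 * stature                                                   # resting leg length (assumed)
        dL = L0 - math.sqrt(max(L0 ** 2 - (speed * t_c / 2.0) ** 2, 0.0)) + dz
        k_leg = F_max / dL                                                    # leg stiffness
        step_length = speed * (t_c + t_a)                                     # distance covered per step
        return F_max, dz, k_vert, dL, k_leg, step_length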
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large-scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
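For orientation, the global scheme that LTS methods accelerate is the explicit Newmark (central-difference) update with a lumped mass matrix, sketched below for a linear undamped system; the multilevel local time-stepping machinery itself, which sub-steps only the refined elements, is not reproduced, and all names are illustrative.

    import numpy as np

    def newmark_explicit(M_diag, K, f, u0, v0, dt, n_steps):
        """Explicit Newmark scheme (beta = 0, gamma = 1/2) for M a + K u = f(t)
        with a lumped, diagonal mass matrix M_diag. f(t) returns the load vector
        at time t; u0 and v0 are the initial displacement and velocity."""
        u, v = u0.copy(), v0.copy()
        a = (f(0.0) - K @ u) / M_diag
        for n in range(n_steps):
            t_next = (n + 1) * dt
            u = u + dt * v + 0.5 * dt ** 2 * a       # displacement predictor (beta = 0)
            a_new = (f(t_next) - K @ u) / M_diag     # acceleration from diagonal mass solve
            v = v + 0.5 * dt * (a + a_new)           # trapezoidal velocity update (gamma = 1/2)
            a = a_new
        return u, v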
Behzadi, Cyrus; Welsch, Goetz H; Laqmani, Azien; Henes, Frank O; Kaul, Michael G; Schoen, Gerhard; Adam, Gerhard; Regier, Marc
2016-08-01
To quantitatively assess the immediate effect of long-distance running on T2 and T2* relaxation times of the articular cartilage of the knee at 3.0 T in young healthy adults. 30 healthy male adults (18-31 years) who perform sports at an amateur level underwent an initial MRI at 3.0 T with T2 weighted [16 echo times (TEs): 9.7-154.6 ms] and T2* weighted (24 TEs: 4.6-53.6 ms) relaxation measurements. Thereafter, all participants performed a 45-min run. After the run, all individuals were immediately re-examined. Data sets were post-processed using dedicated software (ImageJ; National Institute of Health, Bethesda, MD). 22 regions of interest were manually drawn in segmented areas of the femoral, tibial and patellar cartilage. For statistical evaluation, Pearson product-moment correlation coefficients and confidence intervals were computed. Mean initial values were 35.7 ms for T2 and 25.1 ms for T2*. After the run, a significant decrease in the mean T2 and T2* relaxation times was observed for all segments in all participants. A mean decrease of relaxation time was observed for T2 with 4.6 ms (±3.6 ms) and for T2* with 3.6 ms (±5.1 ms) after running. A significant decrease could be observed in all cartilage segments for both biomarkers. Both quantitative techniques, T2 and T2*, seem to be valuable parameters in the evaluation of immediate changes in the cartilage ultrastructure after running. This is the first direct comparison of immediate changes in T2 and T2* relaxation times after running in healthy adults.
Barandun, Ursula; Knechtle, Beat; Knechtle, Patrizia; Klipstein, Andreas; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald
2012-01-01
Background Recent studies have shown that personal best marathon time is a strong predictor of race time in male ultramarathoners. We aimed to determine variables predictive of marathon race time in recreational male marathoners by using the same characteristics of anthropometry and training as used for ultramarathoners. Methods Anthropometric and training characteristics of 126 recreational male marathoners were bivariately and multivariately related to marathon race times. Results After multivariate regression, running speed during the training units (β = −0.52, P < 0.0001) and percent body fat (β = 0.27, P < 0.0001) were the two variables most strongly correlated with marathon race times. Marathon race time for recreational male runners may be estimated to some extent by using the following equation (r2 = 0.44): race time (minutes) = 326.3 + 2.394 × (percent body fat, %) − 12.06 × (speed in training, km/hour). Running speed during training sessions correlated with prerace percent body fat (r = 0.33, P = 0.0002). The model including anthropometric and training variables explained 44% of the variance of marathon race times, whereas running speed during training sessions alone explained 40%. Thus, training speed was more predictive of marathon performance times than anthropometric characteristics. Conclusion The present results suggest that low body fat and a running speed during training close to race pace (about 11 km/hour) are two key factors for a fast marathon race time in recreational male marathon runners. PMID:24198587
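The reported regression can be applied directly; the short sketch below simply evaluates the published equation (the function name is illustrative):

    def predict_marathon_time(percent_body_fat, training_speed_kmh):
        """Marathon race time (minutes) from the regression reported in the
        abstract for recreational male runners (r2 = 0.44)."""
        return 326.3 + 2.394 * percent_body_fat - 12.06 * training_speed_kmh

    # Example: predict_marathon_time(15.0, 11.0) is about 229.6 min (roughly 3 h 50 min).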
Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.
Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C
2017-03-01
Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017-To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity, at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time model (INV). Five thousand meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000-D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five thousand meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study have shown the importance of aerobic and anaerobic energy system contributions for predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
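Of the three models compared, the linear distance-time (LIN) model is the easiest to illustrate: distance at exhaustion is regressed on time, the slope is CS, the intercept is D', and the 5,000-m prediction follows from t = (5,000 − D')/CS. A minimal sketch, assuming the runs are supplied as matched (time, distance) values:

    def predict_5000m(times, distances):
        """Linear distance-time (LIN) critical-speed model d = CS*t + D'.
        `times` (s) and `distances` (m) come from the runs to exhaustion;
        CS is the slope and D' the intercept of an ordinary least-squares fit,
        and the predicted 5,000-m time is t = (5000 - D') / CS."""
        n = len(times)
        mean_t = sum(times) / n
        mean_d = sum(distances) / n
        cs = (sum((t - mean_t) * (d - mean_d) for t, d in zip(times, distances))
              / sum((t - mean_t) ** 2 for t in times))
        d_prime = mean_d - cs * mean_t
        return (5000.0 - d_prime) / cs, cs, d_prime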
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
Actual situation analyses of rat-run traffic on community streets based on car probe data
NASA Astrophysics Data System (ADS)
Sakuragi, Yuki; Matsuo, Kojiro; Sugiki, Nao
2017-10-01
Reducing so-called "rat-run" traffic on community streets has been one of the significant challenges in improving the living environment of neighborhoods. However, it has been difficult to quantitatively grasp the actual extent of rat-run traffic with traditional surveys such as point observations. This study aims to develop a method for extracting rat-run traffic from car probe data. In addition, based on the rat-run traffic extracted for Toyohashi city, Japan, we analyze its actual situation, such as the time and location distribution of the rat-run traffic. As a result, in Toyohashi city, the rate of rat-run route use increases during peak time periods. Regarding the location distribution, rat-run trips pass through a variety of community streets, and there is no great inter-district bias in the routes frequently used for rat-running. Next, we focused on trips passing through one heavily used rat-run route. We found that drivers may habitually use the route for rat-running, because their trips had some commonalities. We also found that they tend to use the rat-run route because it is shorter than the alternative highway route, and that their travel speeds were faster than on the alternative highway route. In conclusion, we confirmed that the proposed method can quantitatively grasp the actual situation and the general tendencies of rat-run traffic.
Smythe, Gayle M; White, Jason D
2011-12-18
Voluntary wheel running can potentially be used to exacerbate the disease phenotype in dystrophin-deficient mdx mice. While it has been established that voluntary wheel running is highly variable between individuals, the key parameters of wheel running that impact the most on muscle pathology have not been examined in detail. We conducted a 2-week test of voluntary wheel running by mdx mice and the impact of wheel running on disease pathology. There was significant individual variation in the average daily distance (ranging from 0.003 ± 0.005 km to 4.48 ± 0.96 km), culminating in a wide range (0.040 km to 67.24 km) of total cumulative distances run by individuals. There was also variation in the number and length of run/rest cycles per night, and the average running rate. Correlation analyses demonstrated that in the quadriceps muscle, a low number of high distance run/rest cycles was the most consistent indicator for increased tissue damage. The amount of rest time between running bouts was a key factor associated with gastrocnemius damage. These data emphasize the need for detailed analysis of individual running performance, consideration of the length of wheel exposure time, and the selection of appropriate muscle groups for analysis, when applying the use of voluntary wheel running to disease exacerbation and/or pre-clinical testing of the efficacy of therapeutic agents in the mdx mouse.
Grid-based Meteorological and Crisis Applications
NASA Astrophysics Data System (ADS)
Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin
2010-05-01
We present several applications from the domain of meteorology and crisis management that we developed and/or plan to develop. In particular, we present IMS Model Suite, a complex software system designed to address the needs of accurate forecasting of weather and hazardous weather phenomena, environmental pollution assessment, and prediction of the consequences of nuclear accidents and radiological emergencies. We discuss the requirements on computational means and our experience of how to meet them with grid computing. The process of pollution assessment and prediction of the consequences in case of a radiological emergency results in complex data-flows and work-flows among databases, models and simulation tools (geographical databases, meteorological and dispersion models, etc.). A pollution assessment and prediction requires running a 3D meteorological model (4 nests with resolutions from 50 km to 1.8 km centered on the nuclear power plant site, 38 vertical levels) as well as running the dispersion model, which simulates the transport and deposition of the released pollutant with respect to the numerical weather prediction data, released material description, topography, land use description and a user-defined simulation scenario. Several post-processing options can be selected according to the particular situation (e.g. dose calculation). Another example is the forecasting of fog, one of the meteorological phenomena hazardous to aviation as well as road traffic. It requires a complicated physical model and high-resolution meteorological modeling due to its dependence on local conditions (precise topography, shorelines and land use classes). An installed fog modeling system requires a four-times-nested parallelized 3D meteorological model with 1.8 km horizontal resolution and 42 vertical levels (approx. 1 million points in 3D space) to be run four times daily. The 3D model outputs and a multitude of local measurements are utilized by an SPMD-parallelized 1D fog model run every hour. The fog forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects in which we participated or participate in cooperation with the academic sphere, namely int.eu.grid (dispersion model deployed as a pilot application on an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant meteorological phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high performance computing technologies. The use of grid technology provides access to much higher computational power not only for modeling and simulation, but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs.
Taking into account that the simulations are used for aviation, road traffic and crisis management, even a small improvement in prediction accuracy may result in a significant improvement in safety as well as cost reduction. We found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include the processing of satellite images, which increases our computational requirements very rapidly. We believe that thanks to grid computing we are able to handle the job almost in real time.
Sex difference in top performers from Ironman to double deca iron ultra-triathlon
Knechtle, Beat; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A
2014-01-01
This study investigated changes in performance and sex difference in top performers for ultra-triathlon races held between 1978 and 2013 from Ironman (3.8 km swim, 180 km cycle, and 42 km run) to double deca iron ultra-triathlon distance (76 km swim, 3,600 km cycle, and 844 km run). The fastest men ever were faster than the fastest women ever for split and overall race times, with the exception of the swimming split in the quintuple iron ultra-triathlon (19 km swim, 900 km cycle, and 210.1 km run). Correlation analyses showed an increase in sex difference with increasing length of race distance for swimming (r2=0.67, P=0.023), running (r2=0.77, P=0.009), and overall race time (r2=0.77, P=0.0087), but not for cycling (r2=0.26, P=0.23). For the annual top performers, split and overall race times decreased across years nonlinearly in female and male Ironman triathletes. For longer distances, cycling split times decreased linearly in male triple iron ultra-triathletes, and running split times decreased linearly in male double iron ultra-triathletes but increased linearly in female triple and quintuple iron ultra-triathletes. Overall race times increased nonlinearly in female triple and male quintuple iron ultra-triathletes. The sex difference decreased nonlinearly in swimming, running, and overall race time in Ironman triathletes but increased linearly in cycling and running and nonlinearly in overall race time in triple iron ultra-triathletes. These findings suggest that women reduced the sex difference nonlinearly in shorter ultra-triathlon distances (ie, Ironman), but for longer distances than the Ironman, the sex difference increased or remained unchanged across years. It seems very unlikely that female top performers will ever outrun male top performers in ultratriathlons. The nonlinear change in speed and sex difference in Ironman triathlon suggests that female and male Ironman triathletes have reached their limits in performance. PMID:25114605
Shah, Rachit D; Cao, Alex; Golenberg, Lavie; Ellis, R Darin; Auner, Gregory W; Pandya, Abhilash K; Klein, Michael D
2009-04-01
Technical advances in the application of laparoscopic and robotic surgical systems have improved platform usability. The authors hypothesized that using two monitors instead of one would lead to faster performance with fewer errors. All tasks were performed using a surgical robot in a training box. One of the monitors was a standard camera with two preset zoom levels (zoomed in and zoomed out, single-monitor condition). The second monitor provided a static panoramic view of the whole surgical field. The standard camera was static at the zoomed-in level for the dual-monitor condition of the study. The study had two groups of participants: 4 surgeons proficient in both robotic and advanced laparoscopic skills and 10 lay persons (nonsurgeons) who were given adequate time to train and familiarize themselves with the equipment. Running a 50-cm rope was the basic task. Advanced tasks included running a suture through predetermined points and intracorporeal knot tying with 3-0 silk. Trial completion times and errors, categorized into three groups (orientation, precision, and task), were recorded. The trial completion times for all the tasks, basic and advanced, in the two groups were not significantly different. Fewer orientation errors occurred in the nonsurgeon group during knot tying (p=0.03) and in both groups during suturing (p=0.0002) in the dual-monitor arm of the study. Differences in precision and task error were not significant. Using two camera views helps both surgeons and lay persons perform complex tasks with fewer errors. These results may be due to better awareness of the surgical field with regard to the location of the instruments, leading to better field orientation. This display setup has potential for use in complex minimally invasive surgeries such as esophagectomy and gastric bypass. This technique also would be applicable to open microsurgery.
Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments
Ambroise, Matthieu; Levi, Timothée; Joucla, Sébastien; Yvert, Blaise; Saïghi, Sylvain
2013-01-01
This investigation of the leech heartbeat neural network system led to the development of a low resources, real-time, biomimetic digital hardware for use in hybrid experiments. The leech heartbeat neural network is one of the simplest central pattern generators (CPG). In biology, CPG provide the rhythmic bursts of spikes that form the basis for all muscle contraction orders (heartbeat) and locomotion (walking, running, etc.). The leech neural network system was previously investigated and this CPG formalized in the Hodgkin–Huxley neural model (HH), the most complex devised to date. However, the resources required for a neural model are proportional to its complexity. In response to this issue, this article describes a biomimetic implementation of a network of 240 CPGs in an FPGA (Field Programmable Gate Array), using a simple model (Izhikevich) and proposes a new synapse model: activity-dependent depression synapse. The network implementation architecture operates on a single computation core. This digital system works in real-time, requires few resources, and has the same bursting activity behavior as the complex model. The implementation of this CPG was initially validated by comparing it with a simulation of the complex model. Its activity was then matched with pharmacological data from the rat spinal cord activity. This digital system opens the way for future hybrid experiments and represents an important step toward hybridization of biological tissue and artificial neural networks. This CPG network is also likely to be useful for mimicking the locomotion activity of various animals and developing hybrid experiments for neuroprosthesis development. PMID:24319408
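The "simple model" named in the abstract is the Izhikevich point neuron; a software sketch of its standard equations is given below for orientation. The published hardware uses switched-capacitor, fixed-point arithmetic and adds the activity-dependent depressing synapse, none of which is shown here, and the parameter values are the usual textbook defaults rather than values from the paper.

    def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_max=1000.0):
        """Izhikevich point-neuron model integrated with forward Euler.
        I(t) is the input current in model units; returns spike times in ms."""
        v, u = -65.0, b * -65.0
        spikes, t = [], 0.0
        while t < t_max:
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I(t))
            u += dt * a * (b * v - u)
            if v >= 30.0:                 # spike: reset membrane and recovery variable
                spikes.append(t)
                v, u = c, u + d
            t += dt
        return spikes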
Performance analysis of local area networks
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.; Hall, Mary Grace
1990-01-01
A simulation of the TCP/IP protocol running on a CSMA/CD data link layer is described. The simulation was implemented using the Simula language, an object-oriented discrete event language. It allows the user to set the number of stations at run time, as well as some station parameters. Those parameters are the interrupt time and the DMA transfer rate for each station. In addition, the user may configure the network at run time with stations of differing characteristics. Two types are available, and the parameters of both types are read from input files at run time. The parameters include the DMA transfer rate, interrupt time, data rate, average message size, maximum frame size and the average interarrival time of messages per station. The information collected for the network is the throughput and the mean delay per packet. For each station, the number of messages attempted as well as the number of messages successfully transmitted is collected in addition to the throughput and mean packet delay per station.
Compression socks and functional recovery following marathon running: a randomized controlled trial.
Armstrong, Stuart A; Till, Eloise S; Maloney, Stephen R; Harris, Gregory A
2015-02-01
Compression socks have become a popular recovery aid for distance running athletes. Although some physiological markers have been shown to be influenced by wearing these garments, scant evidence exists on their effects on functional recovery. This research aims to shed light onto whether the wearing of compression socks for 48 hours after marathon running can improve functional recovery, as measured by a timed treadmill test to exhaustion 14 days following marathon running. Athletes (n = 33, age, 38.5 ± 7.2 years) participating in the 2012 Melbourne, 2013 Canberra, or 2013 Gold Coast marathons were recruited and randomized into the compression sock or placebo group. A graded treadmill test to exhaustion was performed 2 weeks before and 2 weeks after each marathon. Time to exhaustion, average and maximum heart rates were recorded. Participants were asked to wear their socks for 48 hours immediately after completion of the marathon. The change in treadmill times (seconds) was recorded for each participant. Thirty-three participants completed the treadmill protocols. In the compression group, average treadmill run to exhaustion time 2 weeks after the marathon increased by 2.6% (52 ± 103 seconds). In the placebo group, run to exhaustion time decreased by 3.4% (-62 ± 130 seconds), P = 0.009. This shows a significant beneficial effect of compression socks on recovery compared with placebo. The wearing of below-knee compression socks for 48 hours after marathon running has been shown to improve functional recovery as measured by a graduated treadmill test to exhaustion 2 weeks after the event.
Topology Optimization for Reducing Additive Manufacturing Processing Distortions
2017-12-01
… features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and … was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion … the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a …
1986-11-01
The case studies include (1) the Teton Dam failure flood, (2) a hypothetical prismatic channel, (3) Laurel Run Dam, and (4) Stillhouse Hollow Dam. The Laurel Run and Teton case studies involved field data sets from actual dam failures. The hypothetical prismatic channel case study used the Teton reservoir and dam data but replaced the complex Teton Valley geometry with a prismatic channel.
Prediction of half-marathon race time in recreational female and male runners.
Knechtle, Beat; Barandun, Ursula; Knechtle, Patrizia; Zingg, Matthias A; Rosemann, Thomas; Rüst, Christoph A
2014-01-01
Half-marathon running is highly popular. Recent studies tried to find predictor variables of half-marathon race time for recreational female and male runners and to present equations to predict race time. The existing equations included running speed during training as the training variable for both women and men, but midaxillary skinfold for women and body mass index for men as the anthropometric variable. A recent study found that percent body fat and running speed during training sessions were the best predictor variables for half-marathon race times in both women and men. The aim of the present study was to improve the existing equations to predict half-marathon race time in a larger sample of male and female half-marathoners by using percent body fat and running speed during training sessions as predictor variables. In a sample of 147 men and 83 women, multiple linear regression analysis with percent body fat and running speed during training sessions as independent variables and race time as the dependent variable was performed, and equations were derived to predict half-marathon race time. For men, half-marathon race time might be predicted by the equation (r² = 0.42, adjusted r² = 0.41, SE = 13.3) half-marathon race time (min) = 142.7 + 1.158 × percent body fat (%) - 5.223 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.71, p < 0.0001) with the achieved race time. For women, half-marathon race time might be predicted by the equation (r² = 0.68, adjusted r² = 0.68, SE = 9.8) race time (min) = 168.7 + 1.077 × percent body fat (%) - 7.556 × running speed during training (km/h). The predicted race time correlated highly significantly (r = 0.89, p < 0.0001) with the achieved race time. The coefficients of determination of the models were slightly higher than those of the existing equations. Future studies might include physiological variables to increase the coefficients of determination of the models.
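The two regression equations can be applied directly; a minimal Python helper using only the coefficients quoted in the abstract:

```python
def predict_half_marathon_minutes(percent_body_fat, training_speed_kmh, sex):
    """Predicted half-marathon race time (min) from the regression equations
    reported in the abstract (men: n = 147, women: n = 83)."""
    if sex == "male":
        return 142.7 + 1.158 * percent_body_fat - 5.223 * training_speed_kmh
    if sex == "female":
        return 168.7 + 1.077 * percent_body_fat - 7.556 * training_speed_kmh
    raise ValueError("sex must be 'male' or 'female'")

# Example: a male runner with 18% body fat training at 11 km/h
print(round(predict_half_marathon_minutes(18.0, 11.0, "male"), 1))  # ~106.1 min
```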
Tsai, Sheng-Feng; Ku, Nai-Wen; Wang, Tzu-Feng; Yang, Yan-Hsiang; Shih, Yao-Hsiang; Wu, Shih-Ying; Lee, Chu-Wan; Yu, Megan; Yang, Ting-Ting; Kuo, Yu-Min
2018-05-07
Aging impairs hippocampal neuroplasticity and hippocampus-related learning and memory. In contrast, exercise training is known to improve hippocampal neuronal function. However, whether exercise is capable of restoring memory function in old animals is less clear. Here, we investigated the effects of exercise on hippocampal neuroplasticity and memory functions during aging. Young (3 months), middle-aged (9-12 months), and old (18 months) mice underwent moderate-intensity treadmill running training for 6 weeks, and their hippocampus-related learning and memory and the plasticity of their CA1 neurons were evaluated. Memory performance (Morris water maze and novel object recognition tests), dendritic complexity (branch number and length), and spine density of hippocampal CA1 neurons decreased as age increased. The induction and maintenance of high-frequency stimulation-induced long-term potentiation in the CA1 area and the expressions of neuroplasticity-related proteins were not affected by age. Treadmill running increased CA1 neuron long-term potentiation and dendritic complexity in all three age groups, and it restored the learning and memory ability in middle-aged and old mice. Furthermore, treadmill running upregulated the hippocampal expressions of brain-derived neurotrophic factor and monocarboxylate transporter-4 in middle-aged mice, glutamine synthetase in old mice, and full-length TrkB in middle-aged and old mice. Hippocampus-related memory function declines from middle age, but long-term moderate-intensity running effectively increased hippocampal neuroplasticity and memory in mice of different ages, even when the memory impairment had progressed to an advanced stage. Thus, long-term, moderate-intensity exercise training might be a way of delaying and treating aging-related memory decline. © 2018 S. Karger AG, Basel.
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard
2017-11-06
Participation in half-marathons has been increasing steeply during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which occur if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Since load capacity increases along with adaptive running training, the runner's running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners aged 18 to 65 years who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries compared with other subgroups of runners. This will enable sport coaches and physiotherapists as well as the runners themselves to evaluate the injury risk of taking up a 14-week running schedule for half-marathon.
Shallow-Water Nitrox Diving, the NASA Experience
NASA Technical Reports Server (NTRS)
Fitzpatrick, Daniel T.
2009-01-01
NASA's Neutral Buoyancy Laboratory (NBL) contains a 6.2 million gallon, 12-meter deep pool where astronauts prepare for space missions involving space walks (extravehicular activity, EVA). Training is conducted in a space suit (extravehicular mobility unit, EMU) pressurized to 4.0 - 4.3 PSI for up to 6.5 hours while breathing a 46% NITROX mix. Since the facility opened in 1997, over 30,000 hours of suited training have been completed with no occurrence of decompression sickness (DCS) or oxygen toxicity. This study examines the last 5 years of astronaut suited training runs. All suited runs are computer monitored and data are recorded in the Environmental Control System (ECS) database. Astronaut training runs from 2004 - 2008 were reviewed and specific data including total run time, maximum depth and average depth were analyzed. One hundred twenty seven astronauts and cosmonauts completed 2,231 training runs totaling 12,880 exposure hours. Data were available for 96% of the runs. It was revealed that the suit configuration produces a maximum equivalent air depth of 7 meters, essentially eliminating the risk of DCS. Based on average run depth and time, approximately 17% of the training runs exceeded the NOAA oxygen maximum single exposure limits, with no resulting oxygen toxicity. The NBL suited training protocols are safe and time tested. Consideration should be given to reevaluating the NOAA oxygen exposure limits for PO2 levels at or below 1 ATA.
Basso, Julia C; Morrell, Joan I
2017-10-01
Though voluntary wheel running (VWR) has been used extensively to induce changes in both behavior and biology, little attention has been given to the way in which different variables influence VWR. This lack of understanding has led to an inability to utilize this behavior to its full potential, possibly blunting its effects on the endpoints of interest. We tested how running experience, sex, gonadal hormones, and wheel apparatus influence VWR in a range of wheel access "doses". VWR increases over several weeks, with females eventually running 1.5 times farther and faster than males. Limiting wheel access can be used as a tool to motivate subjects to run but restricts the maximal running speeds attained by the rodents. Additionally, circulating gonadal hormones regulate wheel running behavior but are not the sole basis of sex differences in running. Limitations of previous studies include the predominant use of males, emphasis on distance run, variable amounts of wheel availability, variable light-dark cycles, and possible food and/or water deprivation. We designed a comprehensive set of experiments to address these inconsistencies, providing data regarding the "microfeatures" of running, including distance run, time spent running, running rate, bouting behavior, and daily running patterns. By systematically altering wheel access, VWR behavior can be finely tuned - a feature that we hypothesize is due to its positive incentive salience. We demonstrate how to maximize VWR, which will allow investigators to optimize exercise-induced changes in their behavioral and/or biological endpoints of interest. Published by Elsevier B.V.
RNA motif search with data-driven element ordering.
Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa
2016-05-18
In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented a new motif search tool, RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements, and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows for a significant speedup of the search of complex motifs that include pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo.
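The key idea, ordering motif elements so the most specific ones are matched first, can be sketched as follows. This toy uses a uniform-background estimate of how often a degenerate IUPAC pattern matches by chance; RNArobo's actual data-driven ordering is derived from statistics gathered during the search, and the element patterns below are hypothetical.

```python
# Subset of the IUPAC degenerate alphabet, for illustration only
IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U",
         "R": "AG", "Y": "CU", "N": "ACGU"}

def background_probability(pattern):
    """Probability that a random position matches a degenerate pattern,
    assuming uniform base composition (a crude stand-in for the data-driven
    statistics RNArobo gathers while searching)."""
    p = 1.0
    for symbol in pattern:
        p *= len(IUPAC[symbol]) / 4.0
    return p

def order_elements(elements):
    """Match the most specific (least frequently matching) elements first so
    the backtracking search prunes early."""
    return sorted(elements, key=background_probability)

motif = ["NNN", "GRY", "UUCGA"]        # hypothetical single-strand elements
print(order_elements(motif))           # ['UUCGA', 'GRY', 'NNN']
```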
Multicore Architecture-aware Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasa, Avinash
Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to the changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on the application performance, resulting in average speedups of as much as two to four times.
SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.
Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue
2018-05-02
Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. Then, the series of complex numbers formed are transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
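A minimal sketch of the pipeline the abstract describes, with an assumed k-mer-to-complex-number encoding and a one-level undecimated Haar transform standing in for the SDWT actually used by SSAW:

```python
import numpy as np

# Hypothetical k-mer -> complex mapping; SSAW's actual encoding is not reproduced here.
BASE = {"A": 1 + 1j, "C": 1 - 1j, "G": -1 + 1j, "T": -1 - 1j}

def kmer_signal(seq, k=3):
    """Map each k-mer of a DNA sequence to a single complex number."""
    return np.array([sum(BASE[b] for b in seq[i:i + k])
                     for i in range(len(seq) - k + 1)])

def stationary_haar(x):
    """One level of the undecimated (stationary) Haar wavelet transform,
    applied separately to the real and imaginary parts of the signal."""
    def level(v):
        shifted = np.roll(v, -1)
        return (v + shifted) / np.sqrt(2), (v - shifted) / np.sqrt(2)
    a_re, d_re = level(x.real)
    a_im, d_im = level(x.imag)
    return np.concatenate([a_re, d_re, a_im, d_im])  # numeric feature vector

features = stationary_haar(kmer_signal("ACGTGGTACCAGT"))
print(features.shape)   # feature vector usable for clustering/classification
```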
Gray: a ray tracing-based Monte Carlo simulator for PET
NASA Astrophysics Data System (ADS)
Freese, David L.; Olcott, Peter D.; Buss, Samuel R.; Levin, Craig S.
2018-05-01
Monte Carlo simulation software plays a critical role in PET system design. Performing complex, repeated Monte Carlo simulations can be computationally prohibitive, as even a single simulation can require a large amount of time and a computing cluster to complete. Here we introduce Gray, a Monte Carlo simulation software for PET systems. Gray exploits ray tracing methods used in the computer graphics community to greatly accelerate simulations of PET systems with complex geometries. We demonstrate the implementation of models for positron range, annihilation acolinearity, photoelectric absorption, Compton scatter, and Rayleigh scatter. For validation, we simulate the GATE PET benchmark, and compare energy, distribution of hits, coincidences, and run time. We show a speedup using Gray, compared to GATE for the same simulation, while demonstrating nearly identical results. We additionally simulate the Siemens Biograph mCT system with both the NEMA NU-2 scatter phantom and sensitivity phantom. We estimate the total sensitivity within % when accounting for differences in peak NECR. We also estimate the peak NECR to be kcps, or within % of published experimental data. The activity concentration of the peak is also estimated within 1.3%.
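Two of the primitive operations at the heart of any ray-tracing photon Monte Carlo, ray-sphere intersection and exponential free-path sampling, can be sketched as follows; this is a generic illustration, not code from Gray, and the geometry and attenuation coefficient are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def ray_sphere_intersection(origin, direction, center, radius):
    """Smallest positive distance t along a unit-length ray to a sphere, or None
    if the ray misses - the kind of primitive test a ray-tracing Monte Carlo
    evaluates for every photon/surface pair."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t_near = -b - np.sqrt(disc)
    t_far = -b + np.sqrt(disc)
    if t_near > 0:
        return t_near
    return t_far if t_far > 0 else None

def sample_free_path(mu_per_cm):
    """Distance to the next photon interaction in a uniform medium with linear
    attenuation coefficient mu (standard exponential sampling)."""
    return -np.log(rng.random()) / mu_per_cm

origin = np.zeros(3)
direction = np.array([0.0, 0.0, 1.0])          # unit vector: photon direction
t_hit = ray_sphere_intersection(origin, direction, np.array([0.0, 0.0, 40.0]), 5.0)
print(t_hit, sample_free_path(mu_per_cm=0.096))  # ~0.096 /cm: water at 511 keV (approx.)
```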
Orbital Architectures of Dynamically Complex Exoplanet Systems
NASA Astrophysics Data System (ADS)
Nelson, Benjamin E.
2015-01-01
The most powerful constraints on planet formation will come from characterizing the dynamical state of complex multi-planet systems. Unfortunately, with that complexity comes a number of factors that make analyzing these systems a computationally challenging endeavor: the sheer number of model parameters, a wonky shaped posterior distribution, and hundreds to thousands of time series measurements. We develop a differential evolution Markov chain Monte Carlo (RUN DMC) to tackle these difficult aspects of data analysis. We apply RUN DMC to two classic multi-planet systems from radial velocity surveys, 55 Cancri and GJ 876. For 55 Cancri, we find the inner-most planet "e" must be coplanar to within 40 degrees of the outer planets, otherwise Kozai-like perturbations will cause the planet's orbit to cross the stellar surface. We find the orbits of planets "b" and "c" are apsidally aligned and librating with low to median amplitude (50 +6/−10 degrees), but they are not orbiting in a mean-motion resonance. For GJ 876, we can meaningfully constrain the three-dimensional orbital architecture of all the planets based on the radial velocity data alone. By demanding orbital stability, we find the resonant planets have low mutual inclinations (Φ) so they must be roughly coplanar (Φcb = 1.41 +0.62/−0.57 degrees and Φbe = 3.87 +1.99/−1.86 degrees). The three-dimensional Laplace argument librates with an amplitude of 50.5 +7.9/−10.0 degrees, indicating significant past disk migration and ensuring long-term stability. These empirically derived models will provide new challenges for planet formation models and motivate the need for more sophisticated algorithms to analyze exoplanet data.
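RUN DMC is a differential evolution Markov chain Monte Carlo; a generic DE-MC generation looks roughly like the following sketch (toy Gaussian target, standard scaling factor), without any of the tuning used for the radial velocity analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

def de_mcmc_step(chains, log_post, gamma=None, jitter=1e-6):
    """One generation of a differential-evolution MCMC sampler.

    `chains` is an (n_chains, n_params) array of current states and `log_post`
    maps a parameter vector to its log posterior. A generic DE-MC sketch, not
    the tuned RUN DMC implementation.
    """
    n, d = chains.shape
    gamma = gamma or 2.38 / np.sqrt(2 * d)      # standard DE-MC scaling
    new = chains.copy()
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[a] - chains[b]) \
                   + rng.normal(0, jitter, size=d)
        # Metropolis accept/reject on the log posterior ratio
        if np.log(rng.random()) < log_post(proposal) - log_post(chains[i]):
            new[i] = proposal
    return new

# Toy target: standard 2-D Gaussian posterior
log_post = lambda x: -0.5 * np.dot(x, x)
states = rng.normal(size=(16, 2))
for _ in range(100):
    states = de_mcmc_step(states, log_post)
print(states.mean(axis=0))
```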
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, and so the resultant loops are restricted to mesh edges only, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm has only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and can be applied directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., in the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm could run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Impact of water quality on chlorine demand of corroding copper
Copper is widely used in drinking water premise plumbing system materials. In buildings such as hospitals, large and complicated plumbing networks make it difficult to maintain good water quality. Sustaining safe disinfectant residuals throughout a building to protect against waterborne pathogens such as Legionella is particularly challenging since copper and other reactive distribution system materials can exert considerable demands. The objective of this work was to evaluate the impact of pH and orthophosphate on the consumption of free chlorine associated with corroding copper pipes over time. A copper test-loop pilot system was used to control test conditions and systematically meet the study objectives. Chlorine consumption trends attributed to abiotic reactions with copper over time were different for each pH condition tested, and the total amount of chlorine consumed over the test runs increased with increasing pH. Orthophosphate eliminated chlorine consumption trends with elapsed time (i.e., chlorine demand was consistent across entire test runs). Orthophosphate also greatly reduced the total amount of chlorine consumed over the test runs. Interestingly, the total amount of chlorine consumed and the consumption rate were not pH dependent when orthophosphate was present. The findings reflect the complex and competing reactions at the copper pipe wall including corrosion, oxidation of Cu(I) minerals and ions, and possible oxidation of Cu(II) minerals, and the change in
Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks
Akram, Vahid Khalilpour; Dagdeviren, Orhan
2013-01-01
Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms and analyze their correctness as well as their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5 times. PMID:23845930
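For context, the centralized baseline that distributed schemes are usually measured against is the classic DFS low-link bridge test; the sketch below shows that baseline, not the single-phase BFS-integrated algorithms proposed in the paper.

```python
def find_bridges(adj):
    """Centralized bridge detection via DFS low-link values (Tarjan).

    This is the textbook baseline, not the distributed single-phase algorithm
    from the paper. `adj` maps each node to an iterable of neighbours.
    """
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:        # edge (u, v) is a bridge
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges

# Nodes 2-3 form the only bridge in this small sensor topology
print(find_bridges({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))  # [(2, 3)]
```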
Clustering Millions of Faces by Identity.
Otto, Charles; Wang, Dayong; Jain, Anil K
2018-02-01
Given a large collection of unlabeled face images, we address the problem of clustering faces into an unknown number of identities. This problem is of interest in social media, law enforcement, and other applications, where the number of faces can be of the order of hundreds of millions, while the number of identities (clusters) can range from a few thousand to millions. To address the challenges of run-time complexity and cluster quality, we present an approximate Rank-Order clustering algorithm that performs better than popular clustering algorithms (k-Means and Spectral). Our experiments include clustering up to 123 million face images into over 10 million clusters. Clustering results are analyzed in terms of external (known face labels) and internal (unknown face labels) quality measures, and run-time. Our algorithm achieves an F-measure of 0.87 on the LFW benchmark (13K faces of 5,749 individuals), which drops to 0.27 on the largest dataset considered (13K faces in LFW + 123M distractor images). Additionally, we show that frames in the YouTube benchmark can be clustered with an F-measure of 0.71. An internal per-cluster quality measure is developed to rank individual clusters for manual exploration of high-quality clusters that are compact and isolated.
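The rank-order distance underlying the clustering can be illustrated with short neighbour lists; the sketch follows the classic definition, whereas the paper's approximate variant truncates the lists to the top-k neighbours for scalability, and the face identifiers here are arbitrary.

```python
def rank_order_distance(neighbors_a, neighbors_b, a, b):
    """Rank-order distance between faces a and b (classic definition, toy sketch).

    `neighbors_x` is x's nearest-neighbour list ordered by similarity.
    """
    def one_way(order_a, order_b, target):
        # Sum, over a's neighbours ranked above the target, of their rank in b's list
        rank_of_target = order_a.index(target)
        penalty = 0
        for face in order_a[:rank_of_target]:
            penalty += order_b.index(face) if face in order_b else len(order_b)
        return penalty, rank_of_target

    d_ab, rank_ab = one_way(neighbors_a, neighbors_b, b)
    d_ba, rank_ba = one_way(neighbors_b, neighbors_a, a)
    return (d_ab + d_ba) / max(min(rank_ab, rank_ba), 1)   # normalised distance

# Toy neighbour lists for four faces
nn = {"a": ["b", "c", "d"], "b": ["a", "d", "c"],
      "c": ["d", "a", "b"], "d": ["c", "b", "a"]}
print(rank_order_distance(nn["a"], nn["b"], "a", "b"))   # 0.0: mutual nearest neighbours
```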
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
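A toy version of the split-precision idea, a deep-soil-style diffusion where kelvin-scale temperatures and tiny per-step increments coexist, can be written as follows; the model, grid and coefficients are illustrative, not those of the land surface scheme discussed in the abstract.

```python
import numpy as np

def soil_diffusion(T0, kappa, dt, nsteps, split_precision):
    """Forward-Euler 1-D soil heat diffusion (toy illustration).

    With split_precision=False the whole state is float32 and the tiny per-step
    increments round to zero against kelvin-scale temperatures. With True, the
    slowly varying temperature state is integrated in float64 while the flux
    (Laplacian) evaluation stays in float32 - a caricature of the high/low
    precision split described in the abstract.
    """
    state_dtype = np.float64 if split_precision else np.float32
    T = T0.astype(state_dtype)
    for _ in range(nsteps):
        T32 = T.astype(np.float32)                    # low-precision work array
        lap = np.zeros_like(T32)
        lap[1:-1] = T32[2:] - 2.0 * T32[1:-1] + T32[:-2]
        T = T + (kappa * dt * lap).astype(state_dtype)  # accumulate slow increments
    return T

z = np.linspace(0.0, 1.0, 65)
T0 = 280.0 + 5.0 * np.sin(np.pi * z)      # kelvin-scale soil temperature profile
frozen = soil_diffusion(T0, 1e-7, 60.0, 20000, split_precision=False)
evolving = soil_diffusion(T0, 1e-7, 60.0, 20000, split_precision=True)
# The float32 state stays (almost) frozen; the split-precision state evolves.
print(np.abs(frozen - T0).max(), np.abs(evolving - T0).max())
```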
Wheel running decreases palatable diet preference in Sprague-Dawley rats.
Moody, Laura; Liang, Joy; Choi, Pique P; Moran, Timothy H; Liang, Nu-Chu
2015-10-15
Physical activity has beneficial effects, not only improving some disease conditions but also preventing the development of multiple disorders. Experiments in this study examined the effects of wheel running on intakes of chow and palatable diets, e.g., high fat (HF) or high sucrose (HS) diet, in male and female Sprague-Dawley rats. Experiment 1 demonstrated that acute wheel running results in robust HF or HS diet avoidance in male rats. Although female rats with running wheel access initially showed complete avoidance of the two palatable diets, the avoidance of the HS diet was transient. Experiment 2 demonstrated that male rats developed decreased HF diet preferences regardless of the order of diet and wheel running access presentation. Running-associated changes in HF diet preference in females, on the other hand, depended on the testing schedule. In female rats, simultaneous presentation of the HF diet and running access resulted in transient complete HF diet avoidance, whereas running experience prior to HF diet access did not affect the high preference for the HF diet. Ovariectomy in females resulted in HF diet preference patterns that were similar to those in male rats during simultaneous exposure to HF diet and wheel running access but similar to intact females when running occurred before HF exposure. Overall, the results demonstrated wheel running associated changes in palatable diet preferences that were in part sex dependent. Furthermore, ovarian hormones play a role in some of the sex differences. These data reveal complexity in the mechanisms underlying exercise-associated changes in palatable diet preference. Published by Elsevier Inc.
Development and testing of a new system for assessing wheel-running behaviour in rodents.
Chomiak, Taylor; Block, Edward W; Brown, Andrew R; Teskey, G Campbell; Hu, Bin
2016-05-05
Wheel running is one of the most widely studied behaviours in laboratory rodents. As a result, improved approaches for objective monitoring and the gathering of more detailed information are becoming increasingly important for evaluating rodent wheel-running behaviour. Our aim was to develop a new quantitative wheel-running system that can be used for most typical wheel-running experimental protocols. We devised a system that provides a continuous waveform amenable to real-time integration with high-speed video, which is ideal for wheel-running experimental protocols. While quantification of wheel-running behaviour has typically focused on the number of revolutions per unit time as an end-point measure, the approach described here allows more detailed information, such as wheel rotation fluidity, directionality, instantaneous velocity, and acceleration, in addition to the total number of rotations and the temporal pattern of wheel-running behaviour, to be derived from a single trace. We further tested this system with a running-wheel behavioural paradigm that can be used for investigating the neuronal mechanisms of procedural learning and postural stability, and we discuss other potentially useful applications. This system and its ability to evaluate multiple wheel-running parameters may become a useful tool for screening new potentially important therapeutic compounds related to many neurological conditions.
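Given a continuous angle waveform, the micro-features mentioned in the abstract can be derived numerically; the sketch below assumes an unwrapped encoder angle and an arbitrary movement threshold, and is not the authors' implementation.

```python
import numpy as np

def wheel_metrics(angle_rad, fs):
    """Derive wheel-running micro-features from a continuous angle waveform.

    `angle_rad` is the unwrapped wheel angle sampled at `fs` Hz (e.g. from a rotary
    encoder). The threshold and feature names are illustrative assumptions.
    """
    t = np.arange(angle_rad.size) / fs
    omega = np.gradient(angle_rad, t)          # instantaneous angular velocity (rad/s)
    alpha = np.gradient(omega, t)              # instantaneous acceleration (rad/s^2)
    rotations = (angle_rad[-1] - angle_rad[0]) / (2 * np.pi)
    running = np.abs(omega) > 0.5              # "wheel moving" threshold (assumed)
    return {
        "total_rotations": rotations,
        "time_running_s": running.sum() / fs,
        "mean_speed_rad_s": omega[running].mean() if running.any() else 0.0,
        "peak_accel_rad_s2": np.abs(alpha).max(),
        "fraction_forward": (omega[running] > 0).mean() if running.any() else 0.0,
    }

# Synthetic 10 s trace: the wheel spins up, runs steadily, then briefly reverses
fs = 100.0
t = np.arange(0, 10, 1 / fs)
velocity = np.where(t < 8, 4.0 * np.minimum(t, 3) / 3, -2.0)   # rad/s
angle = np.cumsum(velocity) / fs
print(wheel_metrics(angle, fs))
```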
(Quickly) Testing the Tester via Path Coverage
NASA Technical Reports Server (NTRS)
Groce, Alex
2009-01-01
The configuration complexity and code size of an automated testing framework may grow to a point that the tester itself becomes a significant software artifact, prone to poor configuration and implementation errors. Unfortunately, testing the tester by using old versions of the software under test (SUT) may be impractical or impossible: test framework changes may have been motivated by interface changes in the tested system, or fault detection may become too expensive in terms of computing time to justify running until errors are detected on older versions of the software. We propose the use of path coverage measures as a "quick and dirty" method for detecting many faults in complex test frameworks. We also note the possibility of using techniques developed to diversify state-space searches in model checking to diversify test focus, and an associated classification of tester changes into focus-changing and non-focus-changing modifications.
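One cheap way to approximate a path coverage measure in Python is to hash the sequence of executed lines per test run; the sketch below applies this to a stand-in tester function and is only an illustration of the idea, not the instrumentation used in the paper.

```python
import sys
from collections import Counter

def path_coverage(test_fn, trials):
    """Count distinct execution paths (hashed line sequences) exercised by a test
    harness - a rough stand-in for a path coverage measure over the tester itself."""
    paths = Counter()
    for args in trials:
        trace = []

        def tracer(frame, event, arg):
            if event == "line" and frame.f_code is test_fn.__code__:
                trace.append(frame.f_lineno)   # record each executed line of test_fn
            return tracer

        sys.settrace(tracer)
        try:
            test_fn(*args)
        finally:
            sys.settrace(None)
        paths[hash(tuple(trace))] += 1
    return paths

def tiny_tester(x):          # stand-in for a test framework routine
    if x % 2 == 0:
        return "even branch"
    return "odd branch"

coverage = path_coverage(tiny_tester, [(i,) for i in range(10)])
print(f"{len(coverage)} distinct paths over {sum(coverage.values())} runs")
```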
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
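The serial kernel being parallelized is a one-coordinate-at-a-time update; a plain (non-Bayesian, unconditioned) cyclic coordinate descent for lasso-penalized least squares shows the shape of that inner loop, without any of the GPU parallelization described in the paper.

```python
import numpy as np

def lasso_ccd(X, y, lam, n_sweeps=100):
    """Serial cyclic coordinate descent for lasso-penalized least squares.

    Illustrates the one-coordinate-at-a-time update that the paper parallelizes
    on GPUs for a far larger Bayesian GLM; this toy version is neither
    conditioned nor Bayesian.
    """
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual re-includes coordinate j's current contribution
            rho = X[:, j] @ residual + col_norm2[j] * beta[j]
            new_bj = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]
            residual += X[:, j] * (beta[j] - new_bj)   # keep residual in sync
            beta[j] = new_bj
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_beta = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ true_beta + 0.1 * rng.normal(size=200)
print(np.round(lasso_ccd(X, y, lam=5.0), 2))   # recovers the two active coefficients
```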
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems
NASA Astrophysics Data System (ADS)
Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong
Recently, many vendors and groups have advocated using BPEL and WS-BPEL as a workflow language to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made managing and maintaining hierarchical procedural programs difficult. BPEL lacks constructs for logical modularity, such as the requirements construct from the STL [12], or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of Goals and service typing. AI planning was used to enable process engineers to create and validate systems that use services and goals as first-class concepts and to compile processes at run time for execution.
NASA Astrophysics Data System (ADS)
Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan
Lung protective ventilation strategies reduce the risk of ventilator-associated lung injury. To develop such strategies, knowledge about the mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the use of declarative bias to reduce search space complexity while inherently providing for the processing of background knowledge, and (ii) a newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogous to the GSAT algorithm. In 96.8% of all runs the applied equation discovery system was capable of detecting the well-established equation-of-motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
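The equation-of-motion model the system rediscovers is the single-compartment relation Paw(t) = R·V̇(t) + E·V(t) + P0; once the model form is known, fitting it to ventilator time series is a small least-squares problem, as in this sketch with synthetic data (the discovery search itself is not shown).

```python
import numpy as np

def fit_equation_of_motion(pressure, flow, volume):
    """Least-squares fit of the single-compartment equation of motion,
        Paw(t) = R * V'(t) + E * V(t) + P0,
    to ventilator time series. Only the fitting step is shown here, not the
    declarative-bias / GSAT-style equation discovery itself."""
    A = np.column_stack([flow, volume, np.ones_like(volume)])
    (R, E, P0), *_ = np.linalg.lstsq(A, pressure, rcond=None)
    return R, E, P0

# Synthetic passive breath: R = 10 cmH2O·s/L, E = 25 cmH2O/L, PEEP = 5 cmH2O (assumed)
t = np.linspace(0, 1, 200)
flow = 0.6 * np.exp(-2 * t)                  # decelerating inspiratory flow (L/s)
volume = np.cumsum(flow) * (t[1] - t[0])     # integrated volume (L)
pressure = 10 * flow + 25 * volume + 5 + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(fit_equation_of_motion(pressure, flow, volume))   # ≈ (10, 25, 5)
```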