Towards Run-time Assurance of Advanced Propulsion Algorithms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy
2014-01-01
This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
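The switching scheme described above can be sketched in a few lines. This is a hedged illustration only, not NASA's actual framework: the controller gains, the monitored state, and the safety bound are all invented for the example.

```python
# Run-time assurance (RTA) sketch: a monitor watches the plant state and
# reverts from an advanced (uncertified) controller to a simple certified
# baseline when the state leaves the safe envelope. All gains and bounds
# here are illustrative assumptions, not the paper's design.

def advanced_controller(x):
    # Aggressive gain: better performance, not certified.
    return -2.5 * x

def baseline_controller(x):
    # Conservative, certified fallback.
    return -0.5 * x

def rta_step(x, safety_bound=10.0):
    """Return (control, mode): the advanced controller is used only while
    the monitored state stays inside the certified safe envelope."""
    if abs(x) < safety_bound:
        return advanced_controller(x), "advanced"
    return baseline_controller(x), "baseline"
```

In a real system the monitor would check a richer invariant (e.g. a reachable-set bound) rather than a simple magnitude test.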
An enhanced Ada run-time system for real-time embedded processors
NASA Technical Reports Server (NTRS)
Sims, J. T.
1991-01-01
An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.
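The "efficient and precise periodic task execution" mentioned above hinges on a standard scheduling detail, sketched here outside Ada: computing each release time from the task's start rather than from "now + period", so per-cycle lateness does not accumulate. Times below are integer milliseconds; the functions are illustrative, not the paper's run-time system.

```python
# Drift-free vs naive periodic release times (integer milliseconds).

def release_times(start, period, n):
    """Precise scheduling: the i-th release is start + i*period, so a late
    wake-up in one cycle does not shift later cycles."""
    return [start + i * period for i in range(n)]

def naive_releases(start, period, n, lateness):
    """Naive 'now + period' scheduling: each cycle's lateness compounds."""
    t, out = start, []
    for _ in range(n):
        out.append(t)
        t = t + period + lateness  # error accumulates cycle after cycle
    return out
```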
Implementation of an Intelligent Control System
1992-05-01
…therefore implemented in a portable equipment rack. The controls computer consists of a microcomputer running a real-time operating system; interface circuit boards are mounted in an industry-standard Multibus I chassis. The microcomputer runs the iRMX real-time operating system…
X-LUNA: Extending Free/Open Source Real Time Executive for On-Board Space Applications
NASA Astrophysics Data System (ADS)
Braga, P.; Henriques, L.; Zulianello, M.
2008-08-01
In this paper we present xLuna, a system based on the RTEMS [1] Real-Time Operating System that is able to run, on demand, a GNU/Linux Operating System [2] as RTEMS' lowest-priority task. Linux runs in user mode and in a different memory partition. This allows running hard real-time tasks and Linux applications on the same system, sharing the hardware resources while keeping a safe isolation and the real-time characteristics of RTEMS. Communication between both systems is possible through a loosely coupled mechanism based on message queues. Currently, only the SPARC LEON2 processor with a Memory Management Unit (MMU) is supported. The advantage of having two isolated systems is that non-critical components can be developed quickly or simply ported, reducing time-to-market and budget.
Benn, Neil; Turlais, Fabrice; Clark, Victoria; Jones, Mike; Clulow, Stephen
2007-03-01
The authors describe a system for collecting usage metrics from widely distributed automation systems. An application that records and stores usage data centrally, calculates run times, and charts the data was developed. Data were collected over 20 months from at least 28 workstations. The application was used to plot bar charts of date versus run time for individual workstations, the automation in a specific laboratory, or automation of a specified type. The authors show that revised user training, redeployment of equipment, and running complementary processes on one workstation can increase the average number of runs by up to 20-fold and run times by up to 450%. Active monitoring of usage leads to more effective use of automation. Usage data could be used to determine whether purchasing particular automation was a good investment.
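The central calculation the article describes, turning raw start/end records into per-workstation run-time totals, reduces to a simple aggregation. The record layout below is an assumption for illustration, not the application's actual schema.

```python
# Aggregate usage records into per-workstation total run time.
from collections import defaultdict

def run_times(records):
    """records: iterable of (workstation, start_s, end_s) tuples.
    Returns total run time in seconds per workstation."""
    totals = defaultdict(float)
    for ws, start, end in records:
        totals[ws] += end - start
    return dict(totals)
```

Per-day or per-type charts would group on a date or equipment-type field in the same way.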
Development and testing of a new system for assessing wheel-running behaviour in rodents.
Chomiak, Taylor; Block, Edward W; Brown, Andrew R; Teskey, G Campbell; Hu, Bin
2016-05-05
Wheel running is one of the most widely studied behaviours in laboratory rodents. As a result, improved approaches for objective monitoring and for gathering more detailed information are increasingly important for evaluating rodent wheel-running behaviour. Our aim was to develop a new quantitative wheel-running system suitable for most typical wheel-running experimental protocols. We devised a system that provides a continuous waveform amenable to real-time integration with high-speed video. While quantification of wheel-running behaviour has typically focused on the number of revolutions per unit time as an end-point measure, the approach described here derives more detailed information from a single trace: wheel-rotation fluidity, directionality, instantaneous velocity and acceleration, the total number of rotations, and the temporal pattern of wheel-running behaviour. We further tested this system with a running-wheel behavioural paradigm that can be used for investigating the neuronal mechanisms of procedural learning and postural stability, and discuss other potentially useful applications. This system and its ability to evaluate multiple wheel-running parameters may become a useful tool for screening new, potentially important therapeutic compounds related to many neurological conditions.
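The extra measures listed above all derive from one angular-position trace by simple differencing. A minimal sketch, assuming uniformly sampled wheel angle in revolutions (the paper's actual sampling and sensor details are not specified here):

```python
# Derive velocity, acceleration, direction, and total rotations from a
# single wheel-angle trace. theta: angle samples (revolutions); dt: sample
# interval (s). Finite differences approximate the instantaneous values.

def wheel_metrics(theta, dt):
    vel = [(b - a) / dt for a, b in zip(theta, theta[1:])]   # rev/s
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]       # rev/s^2
    direction = ["fwd" if v > 0 else "rev" if v < 0 else "still" for v in vel]
    rotations = abs(theta[-1] - theta[0])
    return vel, acc, direction, rotations
```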
On Why It Is Impossible to Prove that the BDX930 Dispatcher Implements a Time-sharing System
NASA Technical Reports Server (NTRS)
Boyer, R. S.; Moore, J. S.
1983-01-01
The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time-sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real-time constraints. The PASCAL language has no provision for handling the notion of an interrupt, such as the BDX930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time-sharing/virtual-machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.
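The capability the abstract says PASCAL lacks, running a subroutine for a while, suspending it, saving the suspension, and later resuming it, can be made concrete with Python generators as the suspendable tasks. This is purely illustrative of the dispatcher idea, not SIFT's machine code.

```python
# A toy round-robin dispatcher over suspendable tasks. Each `yield` is a
# suspension point; the generator object itself is the saved suspension.

def task(name, n):
    for i in range(n):
        yield f"{name}:{i}"  # suspend here; resume later continues the loop

def dispatcher(tasks):
    """Run tasks round-robin, saving and reactivating suspensions,
    until every task has finished."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))  # run until the next suspension point
            tasks.append(t)        # save the suspension for later activation
        except StopIteration:
            pass                   # task finished; drop it
    return trace
```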
Williams, James A; Eddleman, Laura; Pantone, Amy; Martinez, Regina; Young, Stephen; Van Der Pol, Barbara
2014-08-01
Next-generation diagnostics for Chlamydia trachomatis and Neisseria gonorrhoeae are available on semi- or fully-automated platforms. These systems require less hands-on time than older platforms and are user friendly. Four automated systems, the ABBOTT m2000 system, Becton Dickinson Viper System with XTR Technology, Gen-Probe Tigris DTS system, and Roche cobas 4800 system, were evaluated for total run time, hands-on time, and walk-away time. All of the systems evaluated in this time-motion study were able to complete a diagnostic test run within an 8-h work shift, instrument setup and operation were straightforward and uncomplicated, and walk-away time ranged from approximately 90 to 270 min in a head-to-head comparison of each system. All of the automated systems provide technical staff with increased time to perform other tasks during the run, offer easy expansion of the diagnostic test menu, and have the ability to increase specimen throughput. © 2013 Society for Laboratory Automation and Screening.
Operating system for a real-time multiprocessor propulsion system simulator
NASA Technical Reports Server (NTRS)
Cole, G. L.
1984-01-01
The success of the Real Time Multiprocessor Operating System (RTMPOS) in the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems is evaluated. RTMPOS provides the user with a versatile, interactive means for loading, running, debugging and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and interface between the user and the simulator. These functions are facilitated by RTMPOS, which resides on the FEP. RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, and file-handling services. Once a simulation is formulated, RTMPOS provides for engineering-level, run-time operations such as loading, modifying and specifying the computation flow of programs, simulator mode control, data handling, and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. RTMPOS is programmed mainly in PASCAL along with some assembly-language routines. The RTMPOS software is easily modified to be applicable to hardware from different manufacturers.
Instrument front-ends at Fermilab during Run II
NASA Astrophysics Data System (ADS)
Meyer, T.; Slimmer, D.; Voy, D.
2011-11-01
The optimization of an accelerator relies on the ability to monitor the behavior of the beam in an intelligent and timely fashion. The use of processor-driven front-ends allowed for the deployment of smart systems in the field for improved data collection and analysis during Run II. This paper describes the implementation of the two main systems used: National Instruments LabVIEW running on PCs, and Wind River's VxWorks real-time operating system running in a VME crate processor. Work supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.; Hankins, Walter W., III; Barker, L. Keith
2001-01-01
This report examines a rollout and turnoff (ROTO) system for reducing the runway occupancy time for transport aircraft in low-visibility weather. Simulator runs were made to evaluate the system that includes a head-up display (HUD) to show the pilot a graphical overlay of the runway along with guidance and steering information to a chosen exit. Fourteen pilots (airline, corporate jet, and research pilots) collectively flew a total of 560 rollout and turnoff runs using all eight runways at Hartsfield Atlanta International Airport. The runs consisted of 280 runs for each of two runway visual ranges (RVRs) (300 and 1200 ft). For each visual range, half the runs were conducted with the HUD information and half without. For the runs conducted with the HUD information, the runway occupancy times were lower and more consistent. The effect was more pronounced as visibility decreased. For the 1200-ft visibility, the runway occupancy times were 13% lower with HUD information (46.1 versus 52.8 sec). Similarly, for the 300-ft visibility, the times were 28% lower (45.4 versus 63.0 sec). Also, for the runs with HUD information, 78% (RVR 1200) and 75% (RVR 300) had runway occupancy times less than 50 sec, versus 41 and 20%, respectively, without HUD information.
A Compiler and Run-time System for Network Programming Languages
2012-01-01
A Compiler and Run-time System for Network Programming Languages. Christopher Monsanto (Princeton University), Nate Foster (Cornell University), Rob… N. Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011.
NASA Technical Reports Server (NTRS)
Springer, P.
1993-01-01
This paper describes how the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete-event simulations with maximum efficiency on parallel or distributed computers.
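The discrete-event execution model that TWOS parallelizes can be sketched sequentially in a few lines: events are processed in virtual-time order from a priority queue, and handling an event may schedule new ones. (Time Warp's actual contribution, optimistic execution with rollback, is not shown; this is just the underlying event loop, with invented names.)

```python
# Minimal sequential discrete-event simulation loop.
import heapq

def simulate(initial_events, handler, until):
    """Process (time, event) pairs in virtual-time order up to `until`.
    handler(time, event) may return new (time, event) pairs to schedule."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue and queue[0][0] <= until:
        t, ev = heapq.heappop(queue)
        log.append((t, ev))
        for new in handler(t, ev) or []:
            heapq.heappush(queue, new)
    return log
```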
Toward real-time performance benchmarks for Ada
NASA Technical Reports Server (NTRS)
Clapp, Russell M.; Duchesneau, Louis; Volz, Richard A.; Mudge, Trevor N.; Schultze, Timothy
1986-01-01
The issue of real-time performance measurement for the Ada programming language through the use of benchmarks is addressed. First, the Ada notion of time is examined and a set of basic measurement techniques is developed. Then a set of Ada language features believed to be important for real-time performance is presented and specific measurement methods are discussed. In addition, other important time-related features which are not explicitly part of the language but are part of the run-time system are also identified and measurement techniques developed. The measurement techniques are applied to the language and run-time system features and the results are presented.
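One of the "basic measurement techniques" such benchmarks start from is estimating the clock's usable resolution: spin until the reported time changes and record the gap. A sketch of that technique, transcribed to Python rather than Ada:

```python
# Estimate the monotonic clock's observable tick by spinning until two
# successive readings differ, then taking the median gap over a few samples.
import time

def clock_tick(samples=5):
    gaps = []
    for _ in range(samples):
        t0 = time.monotonic()
        t1 = time.monotonic()
        while t1 == t0:          # busy-wait until the clock advances
            t1 = time.monotonic()
        gaps.append(t1 - t0)
    gaps.sort()
    return gaps[len(gaps) // 2]  # median is robust to scheduling outliers
```

The same loop-until-change idea applies to any language whose standard clock has coarser granularity than a single call.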
NASA Technical Reports Server (NTRS)
1976-01-01
Analysis of the proposed run-around coil system indicates that it offers a decrease in steam, electricity and water consumption. The run-around coil system consists of two coils: a precooling coil located upstream and a reheating coil located downstream of the chilled-water spray chamber. This system will provide the necessary reheat in summer, spring and fall. At times when the run-around coil system cannot provide the necessary reheat, the existing reheat coil could be utilized.
Optimization and Control of Cyber-Physical Vehicle Systems
Bradley, Justin M.; Atkins, Ella M.
2015-01-01
A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter-space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with the means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
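At its core, a run-time autotuner of the kind described above times each candidate configuration on a short probe run and keeps the fastest. A minimal sketch of that loop (real autotuners such as the one in this environment search far larger configuration spaces with smarter strategies; the exhaustive timing loop and names below are illustrative assumptions):

```python
# Pick the configuration whose probe run completes fastest.
import time

def autotune(kernel, configs, probe_arg):
    """Time kernel(probe_arg, cfg) for each cfg and return the fastest cfg."""
    best, best_t = None, float("inf")
    for cfg in configs:
        start = time.perf_counter()
        kernel(probe_arg, cfg)
        elapsed = time.perf_counter() - start
        if elapsed < best_t:
            best, best_t = cfg, elapsed
    return best
```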
NASA Astrophysics Data System (ADS)
Alessio, F.; Barandela, M. C.; Callot, O.; Duval, P.-Y.; Franek, B.; Frank, M.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Neufeld, N.; Sambade, A.; Schwemmer, R.; Somogyi, P.
2010-04-01
LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented.
NASA Technical Reports Server (NTRS)
Mckay, C. W.; Bown, R. L.
1985-01-01
The paper discusses the importance of linking Ada Run Time Support Environments to the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). A non-stop network operating systems scenario is presented to serve as a forum for identifying the important issues. The network operating system exemplifies the issues involved in the NASA Space Station data management system.
HAL/S-360 compiler system specification
NASA Technical Reports Server (NTRS)
Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.
1974-01-01
A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.
A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data
Li, Pengfei; Li, Yan; Guo, Xiucheng
2014-01-01
The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present artificial neural networks (ANNs) that approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of an innovative red-light running prevention system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems. PMID:25435870
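As a stand-in for the paper's ANN, the classification idea can be sketched with a single logistic neuron trained by gradient descent on two assumed trajectory features (say, normalized speed and distance to the stop bar at yellow onset). The real system uses a multi-layer network on field trajectory data; the features, data, and training setup below are invented for illustration.

```python
# Train one logistic neuron to flag potential red-light runners from two
# trajectory features; plain stochastic gradient descent on log-loss.
import math

def train(data, labels, lr=0.5, epochs=2000):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y                       # gradient of log-loss wrt logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """1 = predicted red-light runner, 0 = predicted stopper."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```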
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
…that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell® PowerEdge M520 blade server [8] running Ubuntu Linux 14.04… To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server… The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on…
Clark, Ross A; Paterson, Kade; Ritchie, Callan; Blundell, Simon; Bryant, Adam L
2011-03-01
Commercial timing light systems (CTLS) provide precise measurement of athletes' running velocity; however, they are often expensive and difficult to transport. In this study, an inexpensive, wireless and portable timing light system was created using the infrared camera in Nintendo Wii hand controllers (NWHC). System creation with gold-standard validation. A Windows-based software program using NWHC to replicate a dual-beam timing gate was created. Firstly, data collected during 2 m walking and running trials were validated against a 3D kinematic system. Secondly, data recorded during 5 m running trials at various intensities, from standing or flying starts, were compared to a single-beam CTLS and to the independent and average scores of three handheld-stopwatch (HS) operators. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess validity. Absolute error quartiles and the percentage of trials within absolute error threshold ranges were used to determine accuracy. The NWHC system was valid when compared against the 3D kinematic system (ICC=0.99, median absolute error (MAR)=2.95%). For the flying 5 m trials the NWHC system possessed excellent validity and precision (ICC=0.97, MAR<3%) when compared with the CTLS. In contrast, the NWHC system and the HS values during standing-start trials possessed only modest validity (ICC<0.75) and accuracy (MAR>8%). A NWHC timing light system is inexpensive, portable and valid for assessing running velocity. Errors in the 5 m standing-start trials may have been due to erroneous event detection by either the commercial or NWHC-based timing light systems. Copyright © 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
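The arithmetic behind any dual-gate system, commercial or Wii-based, is just distance over elapsed time between gate triggers. A minimal sketch (units and the validation check are assumptions, not the study's software):

```python
# Split velocity from two timing-gate trigger timestamps.

def split_velocity(t_gate1, t_gate2, gate_distance_m):
    """Average velocity (m/s) between two gates a known distance apart."""
    dt = t_gate2 - t_gate1
    if dt <= 0:
        raise ValueError("gate 2 must trigger after gate 1")
    return gate_distance_m / dt
```

The study's error sources (beam-break event detection, reaction time in stopwatch operators) affect the timestamps fed into this formula, not the formula itself.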
Implementation of a Learning Design Run-Time Environment for the .LRN Learning Management System
ERIC Educational Resources Information Center
del Cid, Jose Pablo Escobedo; de la Fuente Valentin, Luis; Gutierrez, Sergio; Pardo, Abelardo; Kloos, Carlos Delgado
2007-01-01
The IMS Learning Design specification aims at capturing the complete learning flow of courses, without being restricted to a particular pedagogical model. Such flow description for a course, called a Unit of Learning, must be able to be reproduced in different systems using a so called run-time environment. In the last few years there has been…
Performance Analysis of and Tool Support for Transactional Memory on BG/Q
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schindewolf, M
2011-12-08
Martin Schindewolf worked during his internship at the Lawrence Livermore National Laboratory (LLNL) under the guidance of Martin Schulz at the Computer Science Group of the Center for Applied Scientific Computing. We studied the performance of the TM subsystem of BG/Q as well as the possibilities for tool support for TM. To study the performance, we ran CLOMP-TM, a benchmark designed to quantify the overhead of OpenMP and compare different synchronization primitives. To advance CLOMP-TM, we added Message Passing Interface (MPI) routines for a hybrid parallelization. This enables running multiple MPI tasks, each running OpenMP, on one node. With these enhancements, a beneficial MPI-task-to-OpenMP-thread ratio is determined. Further, the synchronization primitives are ranked as a function of the application characteristics. To demonstrate the usefulness of these results, we investigated a real Monte Carlo simulation called the Monte Carlo Benchmark (MCB). Applying the lessons learned yielded the best task-to-thread ratio, and we were able to tune the synchronization by transactifying the MCB. We also developed tools that capture the performance of the TM run-time system and present it to the application's developer. The performance of the TM run-time system relies on its built-in statistics. These tools use the Blue Gene Performance Monitoring (BGPM) interface to correlate the statistics from the TM run-time system with performance counter values. This combination provides detailed insight into the run-time behavior of the application and enables tracking down the cause of degraded performance. One tool separates the performance counters into three categories: Successful Speculation, Unsuccessful Speculation and No Speculation. All of the tools are crafted around IBM's xlc compiler for C and C++ and have been run and tested on a Q32 early-access system.
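The speculation categories above ("Successful" vs "Unsuccessful Speculation") reflect transactional memory's optimistic model: run a transaction, then commit only if nothing it read changed, otherwise retry. A hedged, sequential sketch of that idea, not BG/Q's hardware TM, with all names invented:

```python
# Version-validated optimistic transactions over shared variables.

class TVar:
    """A transactional variable: a value plus a commit-version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(body, tvars):
    """Retry body until every version it read is still current at commit.
    body receives {tvar: value} and returns {tvar: new_value}.
    Returns the number of retries (unsuccessful speculations)."""
    retries = 0
    while True:
        read = {v: v.version for v in tvars}          # record read-set versions
        updates = body({v: v.value for v in tvars})   # speculate
        if all(v.version == ver for v, ver in read.items()):
            for v, new in updates.items():            # commit atomically
                v.value = new
                v.version += 1
            return retries
        retries += 1                                  # conflict: roll back, retry
```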
Research on memory management in embedded systems
NASA Astrophysics Data System (ADS)
Huang, Xian-ying; Yang, Wu
2005-12-01
Memory is a scarce resource in embedded systems due to cost and size constraints, so applications cannot use memory as freely as desktop applications do, yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee operation under limited-memory conditions. Embedded systems often have small memories and are required to run for long periods. Thus, one goal of this study is to construct an allocator that allocates memory effectively, tolerates long-running operation, and reduces memory fragmentation and exhaustion. Fragmentation and exhaustion depend on the allocation algorithm; static memory allocation cannot produce fragmentation. This paper seeks an effective dynamic allocation algorithm that reduces memory fragmentation. Data is the critical part that allows an application to run correctly, and it takes up a large amount of memory; the amount of data that can be stored in a given memory depends on the chosen data structure. Techniques for designing application data in mobile phones are also explained and discussed.
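One classic fragmentation-avoiding strategy in this design space is a fixed-size block pool: because every block has the same size, any freed block satisfies any later request, so external fragmentation cannot arise. The sketch below illustrates the bookkeeping only (a real embedded pool manages raw bytes, not Python objects; the free-list design is an assumption):

```python
# Fixed-size block pool: fragmentation-free allocation with a free list.

class BlockPool:
    def __init__(self, block_size, n_blocks):
        self.block_size = block_size
        self.free = list(range(n_blocks))  # indices of free blocks

    def alloc(self):
        """O(1) allocation; any free block fits any request."""
        if not self.free:
            raise MemoryError("pool exhausted")
        return self.free.pop()

    def free_block(self, idx):
        """Return a block to the pool; it is immediately reusable."""
        self.free.append(idx)
```

The trade-off is internal fragmentation: requests smaller than the block size waste the remainder, so pools are typically sized per object class.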
State estimator for multisensor systems with irregular sampling and time-varying delays
NASA Astrophysics Data System (ADS)
Peñarrocha, I.; Sanchis, R.; Romero, J. A.
2012-08-01
This article addresses state estimation in linear time-varying systems with several sensors of different availability, randomly sampled in time and whose measurements have a time-varying delay. The approach is based on a modification of the Kalman filter with a negative-time measurement update strategy, avoiding re-running the full standard Kalman filter, the use of full augmented-order models, or the use of reorganisation techniques, leading to an algorithm with lower implementation cost. The update equations are run every time a new measurement is available, independently of the time when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements, and for out-of-sequence measurements.
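The core of handling irregular sampling is to run the prediction step over the actual elapsed interval whenever a measurement arrives, then apply the usual update. A minimal scalar sketch of that idea (the paper's filter also handles delays and multiple sensors, which this version does not; the noise levels are assumptions):

```python
# Scalar Kalman filter over irregularly timestamped measurements.

def kalman_irregular(measurements, q=0.01, r=0.1):
    """measurements: list of (timestamp, value), time-ordered.
    q: process-noise growth per unit time; r: measurement-noise variance.
    Returns the state estimate after each measurement."""
    x, p, t_prev = measurements[0][1], r, measurements[0][0]
    estimates = [x]
    for t, z in measurements[1:]:
        dt = t - t_prev
        p = p + q * dt          # predict: uncertainty grows with elapsed time
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # measurement update
        p = (1 - k) * p
        t_prev = t
        estimates.append(x)
    return estimates
```

A longer gap between samples yields a larger predicted variance and hence a larger gain, i.e. the filter trusts a fresh measurement more after a long silence.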
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
To satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper-lower computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on Windows, using the RTX real-time extension subsystem together with a reflective memory network to guarantee real-time performance and to carry out real-time tasks such as computing the simulation model, transmitting simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run in the RTSS process. LabWindows/CVI is used to build a graphical interface and to handle non-real-time tasks in the simulation such as man-machine interaction and the display and storage of simulation data, which run in a Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data exchange between the real-time RTSS process and the non-real-time Win32 process is achieved. The experimental results show that the system has strong real-time performance, high stability, high simulation accuracy, and good human-computer interaction.
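The shared-memory hand-off between the real-time and GUI processes can be sketched in miniature; Python's `multiprocessing.shared_memory` stands in for RTX shared memory, and the segment name and frame contents are invented:

```python
from multiprocessing import shared_memory

# Sketch of the RTSS/Win32 data exchange: one side writes a simulation
# frame into a named shared segment, the other attaches and reads it.
# Segment name and frame bytes are invented for illustration.
producer = shared_memory.SharedMemory(create=True, size=8, name="sim_data")
producer.buf[:4] = b"\x01\x02\x03\x04"   # "real-time" side writes a frame

consumer = shared_memory.SharedMemory(name="sim_data")  # "GUI" side attaches
frame = bytes(consumer.buf[:4])

consumer.close()
producer.close()
producer.unlink()
print(frame)  # b'\x01\x02\x03\x04'
```

In the real system the two sides are separate processes with a scheduling protocol around the segment; here both run in one process purely to show the attach-by-name pattern.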
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induce heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time, before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.
Li, Yiming; Qian, Mingli; Li, Long; Li, Bin
2014-07-01
This paper proposes a real-time monitoring system for the running status of medical monitors based on the Internet of Things. On the hardware side, a solution combining ZigBee networks with 470 MHz networks is proposed. On the software side, a graphical monitoring interface and real-time equipment failure alarms are implemented. The system provides remote equipment failure detection and wireless localization, offering a practical and effective method for medical equipment management.
Nüesch, Corina; Roos, Elena; Pagenstert, Geert; Mündermann, Annegret
2017-05-24
Inertial sensor systems are becoming increasingly popular for gait analysis because their use is simple and time efficient. This study aimed to compare joint kinematics measured by the inertial sensor system RehaGait® with those of an optoelectronic system (Vicon®) for treadmill walking and running. Additionally, the test re-test repeatability of kinematic waveforms and discrete parameters for the RehaGait® was investigated. Twenty healthy runners participated in this study. Inertial sensors and reflective markers (PlugIn Gait) were attached according to respective guidelines. The two systems were started manually at the same time. Twenty consecutive strides for walking and running were recorded and each software calculated sagittal plane ankle, knee and hip kinematics. Measurements were repeated after 20min. Ensemble means were analyzed calculating coefficients of multiple correlation for waveforms and root mean square errors (RMSE) for waveforms and discrete parameters. After correcting the offset between waveforms, the two systems/models showed good agreement with coefficients of multiple correlation above 0.950 for walking and running. RMSE of the waveforms were below 5° for walking and below 8° for running. RMSE for ranges of motion were between 4° and 9° for walking and running. Repeatability analysis of waveforms showed very good to excellent coefficients of multiple correlation (>0.937) and RMSE of 3° for walking and 3-7° for running. These results indicate that in healthy subjects sagittal plane joint kinematics measured with the RehaGait® are comparable to those using a Vicon® system/model and that the measured kinematics have a good repeatability, especially for walking. Copyright © 2017 Elsevier Ltd. All rights reserved.
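The offset correction and RMSE computation used in the waveform comparison can be sketched as follows (toy waveforms, not RehaGait or Vicon output):

```python
import math

# RMSE between two kinematic waveforms after removing the mean offset,
# mirroring the offset-corrected comparison described above.
def rmse_after_offset(a, b):
    offset = sum(x - y for x, y in zip(a, b)) / len(a)
    return math.sqrt(sum((x - y - offset) ** 2
                         for x, y in zip(a, b)) / len(a))

vicon = [10.0, 20.0, 30.0]            # joint angle, degrees (toy data)
imu   = [12.0, 22.0, 32.0]            # same waveform with a 2-degree offset
print(rmse_after_offset(vicon, imu))  # 0.0: the constant offset is removed
```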
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
MyCoach: In Situ User Evaluation of a Virtual and Physical Coach for Running
NASA Astrophysics Data System (ADS)
Biemans, Margit; Haaker, Timber; Szwajcer, Ellen
Running is an enjoyable exercise for many people today. Trainers help people to reach running goals. However, today’s busy and nomadic people are not always able to attend running classes. A combination of a virtual and physical coach should be useful. A virtual coach (MyCoach) was designed to provide this support. MyCoach consists of a mobile phone (real time) and a web application, with a focus on improving health and well-being. A randomised controlled trial was performed to evaluate MyCoach. The results indicate that the runners value the tangible aspects on monitoring and capturing their exercise and analysing progress. The system could be improved by incorporating running schedules provided by the physical trainer and by improving its usability. Extensions of the system should focus on the real-time aspects of information sharing and “physical” coaching at a distance.
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
2016-08-01
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge, owing to source code changes over time and to dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, built on Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org, and the source code is available under the GPL license at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
de Visser, Leonie; van den Bos, Ruud; Spruijt, Berry M
2005-05-28
This paper introduces automated observations in a modular home cage system as a tool to measure the effects of wheel running on the time distribution and daily organization of cage floor locomotor activity in female C57BL/6 mice. Mice (n = 16) were placed in the home cage system for 6 consecutive days. Fifty percent of the subjects had free access to a running wheel that was integrated in the home cage. Overall activity levels in terms of duration of movement were increased by wheel running, while time spent inside a sheltering box was decreased. Wheel running affected the hourly pattern of movement during the animals' active period of the day. Mice without a running wheel, in contrast to mice with a running wheel, showed a clear differentiation between novelty-induced and baseline levels of locomotion as reflected by a decrease after the first day of introduction to the home cage. The results are discussed in the light of the use of running wheels as a tool to measure general activity and as an object for environmental enrichment. Furthermore, the possibilities of using automated home cage observations for e.g. behavioural phenotyping are discussed.
Effect of metrology time delay on overlay APC
NASA Astrophysics Data System (ADS)
Carlson, Alan; DiBiase, Debra
2002-07-01
The run-to-run control strategy of lithography APC is primarily composed of a feedback loop as shown in the diagram below. It is known that the insertion of a time delay in a feedback loop can cause degradation in control performance and can even cause a stable system to become unstable if the time delay becomes sufficiently large. Many proponents of integrated metrology methods have cited the damage caused by metrology time delays as the primary justification for moving from stand-alone to integrated metrology. While there is little dispute over the qualitative form of this argument, very little has been published about the quantitative effects under real fab conditions - precisely how much control is lost due to these time delays. Another issue is that the length of these delays is not typically fixed - they vary from lot to lot, and in some cases this variance can be large - from one hour on the short side to over 32 hours on the long side. Concern has been expressed that the variability in metrology time delays can cause undesirable dynamics in feedback loops that make it difficult to optimize feedback filters and gains, and at worst could drive a system unstable. Using data from numerous fabs, spanning many sizes and styles of operation, we have conducted a quantitative study of the time delay effect on overlay run-to-run control. Our analysis resulted in the following conclusions: (1) There is a significant and material relationship between metrology time delay and overlay control under a variety of real-world production conditions. (2) The run-to-run controller can be configured to minimize sensitivity to time delay variations. (3) The value of moving to integrated metrology can be quantified.
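The delay effect discussed above can be reproduced with a toy simulation. The sketch below (the EWMA gain, drift rate, and noise level are illustrative assumptions, not fab data) shows how an integrating run-to-run loop degrades, and eventually goes unstable, as the metrology delay grows:

```python
import random

# Toy EWMA run-to-run overlay controller with a metrology delay of
# `delay` lots: the correction applied at lot k uses the measurement
# from lot k - delay. All numeric values are illustrative assumptions.
def simulate(delay, lam=0.3, drift=0.1, n=200, seed=1):
    random.seed(seed)
    est, errors, history = 0.0, [], []
    for k in range(n):
        true_offset = drift * k                        # slow process drift
        y = true_offset - est + random.gauss(0, 0.05)  # post-correction error
        errors.append(abs(y))
        history.append(y)
        if k >= delay:                                 # delayed feedback
            est += lam * history[k - delay]
    return sum(errors) / n

# A longer metrology delay leaves more drift uncompensated; with this
# gain, an 8-lot delay actually destabilizes the loop.
assert simulate(delay=1) < simulate(delay=8)
```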
Decrease in medical command errors with use of a "standing orders" protocol system.
Holliman, C J; Wuerz, R C; Meador, S A
1994-05-01
The purpose of this study was to determine physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval increased under the standing orders system. A prospectively conducted audit of prehospital advanced life support (ALS) trip sheets was performed at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)
Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei
2008-10-28
Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.
Adaptive real-time methodology for optimizing energy-efficient computing
Hsu, Chung-Hsing [Los Alamos, NM]; Feng, Wu-Chun [Blacksburg, VA]
2011-06-28
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
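The frequency-selection idea can be sketched as follows; the sensitivity model (the fraction of run time that scales with CPU frequency) and the frequency steps are illustrative assumptions, not the patented method's exact calculation:

```python
# Sketch of sensitivity-driven DVFS: pick the lowest CPU frequency whose
# predicted slowdown stays within a performance-loss budget.
def choose_frequency(freqs_mhz, sensitivity, max_slowdown=0.05):
    """freqs_mhz: available steps, descending; sensitivity in [0, 1]."""
    f_max = freqs_mhz[0]
    best = f_max
    for f in freqs_mhz:
        # Predicted slowdown if only `sensitivity` of the work is CPU-bound.
        slowdown = sensitivity * (f_max / f - 1.0)
        if slowdown <= max_slowdown:
            best = f             # keep the lowest admissible frequency
    return best

steps = [2400, 2000, 1600, 1200]
print(choose_frequency(steps, sensitivity=0.2))  # 2000: memory-bound work
print(choose_frequency(steps, sensitivity=0.9))  # 2400: CPU-bound work
```

Because slowdown grows monotonically as frequency drops, the last admissible step in the descending scan is the lowest safe frequency.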
Application configuration selection for energy-efficient execution on multicore systems
Wang, Shinan; Luo, Bing; Shi, Weisong; ...
2015-09-21
Balanced performance and energy consumption are incorporated in the design of modern computer systems. Several run-time factors, such as concurrency levels, thread mapping strategies, and dynamic voltage and frequency scaling (DVFS), should be considered in order to achieve optimal energy efficiency for a workload. Selecting appropriate run-time factors, however, is one of the most challenging tasks because these factors are architecture-specific and workload-specific. While most existing works concentrate on either static analysis of the workload or run-time prediction results, we present a hybrid two-step method that utilizes concurrency levels and DVFS settings to achieve the energy-efficient configuration for a workload. Experimental results based on a Xeon E5620 server with the NPB and PARSEC benchmark suites show that the model is able to predict the energy-efficient configuration accurately. On average, an additional 10% EDP (Energy Delay Product) saving is obtained by using run-time DVFS for the entire system. An off-line optimal solution is used for comparison with the proposed scheme; the experimental results show that the average extra EDP saved by the optimal solution is within 5% on selected parallel benchmarks.
Real-time acquisition and tracking system with multiple Kalman filters
NASA Astrophysics Data System (ADS)
Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.
1994-07-01
The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
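A minimal sketch of the parallel-filter idea, assuming independent per-object tracks and a simplified constant-gain update (the fielded system runs full Kalman filters; a thread pool stands in for the parallel processors, and processes would be used for genuinely CPU-bound filters):

```python
from concurrent.futures import ThreadPoolExecutor

# Each object has its own filter copy, so updates are independent and
# can be dispatched in parallel. The constant-gain scalar update is a
# simplification of the real Kalman filter for illustration only.
def update_track(track, z, gain=0.5):
    x, cov = track
    return (x + gain * (z - x), (1.0 - gain) * cov)

tracks = [(0.0, 1.0), (10.0, 1.0), (20.0, 1.0)]   # (estimate, covariance)
measurements = [1.0, 9.0, 22.0]                   # one sample per object

with ThreadPoolExecutor(max_workers=3) as pool:
    updated = list(pool.map(update_track, tracks, measurements))

print(updated[0])  # (0.5, 0.5): estimate moved halfway to the measurement
```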
An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation
Nutaro, James
2014-11-03
In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system that has been built within the adevs simulation package supports models with state-events and time-events and that comprise differential-algebraic systems with high index. Finally, although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification, it can be adapted to any discrete event simulation package.
Holistic Context-Sensitivity for Run-Time Optimization of Flexible Manufacturing Systems.
Scholze, Sebastian; Barata, Jose; Stokic, Dragan
2017-02-24
Highly flexible manufacturing systems require continuous run-time (self-) optimization of processes with respect to diverse parameters, e.g., efficiency, availability, energy consumption, etc. A promising approach for achieving (self-) optimization in manufacturing systems is context sensitivity based on data streaming from a large number of sensors and other data sources. Cyber-physical systems play an important role as sources of information for achieving context sensitivity; they can be seen as complex intelligent sensors providing the data needed to identify the current context under which the manufacturing system is operating. In this paper, it is demonstrated how context sensitivity can be used to realize a holistic solution for (self-) optimization of discrete flexible manufacturing systems, by making use of cyber-physical systems integrated into manufacturing systems and processes. A generic approach to context sensitivity, based on self-learning algorithms, is proposed that is aimed at a variety of manufacturing systems. The new solution encompasses a run-time context extractor and optimizer; based on the self-learning module, both the context extraction and the optimizer continuously learn and improve their performance. The solution follows Service Oriented Architecture principles. The generic solution is developed and then applied to two very different manufacturing processes.
NASA Astrophysics Data System (ADS)
Yu, Haijun; Li, Guofu; Duo, Liping; Jin, Yuqi; Wang, Jian; Sang, Fengting; Kang, Yuanfu; Li, Liucheng; Wang, Yuanhu; Tang, Shukai; Yu, Hongliang
2015-02-01
A user-friendly data acquisition and control system (DACS) for a pulsed chemical oxygen-iodine laser (PCOIL) has been developed. It is implemented with an industrial control computer, a PLC, and distributed input/output (I/O) modules, together with valves and transmitters. The system is capable of handling 200 analogue/digital channels for operations such as on-line acquisition, display, safety measures, and the control of various valves. These operations are controlled either by switches configured on a PC while the laser is not running, or by a pre-determined sequence and timings during a run. The system is capable of real-time acquisition and on-line estimation of important diagnostic parameters for optimization of a PCOIL. The DACS has been programmed using programmable logic controller (PLC) software. Using this DACS, more than 200 runs were performed successfully.
Missed deadline notification in best-effort schedulers
NASA Astrophysics Data System (ADS)
Banachowski, Scott A.; Wu, Joel; Brandt, Scott A.
2003-12-01
It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
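The MDN mechanism could be approximated in application code as follows; the notification function, the miss-counting policy, and the task name are assumptions for illustration (the paper's MDN is an argument-free kernel system call, and the scheduler's response policy lives in the kernel):

```python
import time

# Sketch of Missed Deadline Notification: a periodic task reports an
# overrun, and a stand-in "scheduler" simply counts misses per task so
# it could later boost the noisiest tasks. Policy here is an assumption.
missed = {}

def notify_missed_deadline(task):
    missed[task] = missed.get(task, 0) + 1

def run_periodic(task, work_s, period_s):
    start = time.monotonic()
    time.sleep(work_s)                       # the task's work for one period
    if time.monotonic() - start > period_s:  # deadline = end of the period
        notify_missed_deadline(task)

run_periodic("decoder", work_s=0.02, period_s=0.01)  # overruns its period
run_periodic("decoder", work_s=0.0, period_s=1.0)    # meets its deadline
print(missed)  # {'decoder': 1}
```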
78 FR 42595 - Marine Vapor Control Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-16
... estimates for certifications and recertifications to reflect labor necessary for dry runs, and during dry runs and witnessed wet loads should take into account the time spent waiting for items to be corrected and...
Simulation of a Real-Time Local Data Integration System over East-Central Florida
NASA Technical Reports Server (NTRS)
Case, Jonathan
1999-01-01
The Applied Meteorology Unit (AMU) simulated a real-time configuration of a Local Data Integration System (LDIS) using data from 15-28 February 1999. The objectives were to assess the utility of a simulated real-time LDIS, evaluate and extrapolate system performance to identify the hardware necessary to run a real-time LDIS, and determine the sensitivities of LDIS. The ultimate goal for running LDIS is to generate analysis products that enhance short-range (less than 6 h) weather forecasts issued in support of the 45th Weather Squadron, Spaceflight Meteorology Group, and Melbourne National Weather Service operational requirements. The simulation used the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) software on an IBM RS/6000 workstation with a 67-MHz processor. This configuration ran in real-time, but not sufficiently fast for operational requirements. Thus, the AMU recommends a workstation with a 200-MHz processor and 512 megabytes of memory to run the AMU's configuration of LDIS in real-time. This report presents results from two case studies and several data sensitivity experiments. ADAS demonstrates utility through its ability to depict high-resolution cloud and wind features in a variety of weather situations. The sensitivity experiments illustrate the influence of disparate data on the resulting ADAS analyses.
Level-2 Milestone 3244: Deploy Dawn ID Machine for Initial Science Runs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, D
2009-09-21
This report documents the delivery, installation, integration, testing, and acceptance of the Dawn system, ASC L2 milestone 3244: Deploy Dawn ID Machine for Initial Science Runs, due September 30, 2009. The full text of the milestone is included in Attachment 1. The description of the milestone is: This milestone will be a result of work started three years ago with the planning for a multi-petaFLOPS UQ-focused platform (Sequoia) and will be satisfied when a smaller ID version of the final system is delivered, installed, integrated, tested, accepted, and deployed at LLNL for initial science runs in support of SSP mission. The deliverable for this milestone will be a LA petascale computing system (named Dawn) usable for code development and scaling necessary to ensure effective use of a final Sequoia platform (expected in 2011-2012), and for urgent SSP program needs. Allocation and scheduling of Dawn as an LA system will likely be performed informally, similar to what has been used for BlueGene/L. However, provision will be made to allow for dedicated access times for application scaling studies across the entire Dawn resource. The milestone was completed on April 1, 2009, when science runs began running on the Dawn system. The following sections describe the Dawn system architecture, current status, installation and integration time line, and testing and acceptance process. A project plan is included as Attachment 2. Attachment 3 is a letter certifying the handoff of the system to a nuclear weapons stockpile customer. Attachment 4 presents the results of science runs completed on the system.
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline that is representative of the time it takes the software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time that is representative of the time the known software application runs on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
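The timing-baseline idea can be sketched as follows; the deviation rule (k standard deviations) and the timing values are illustrative assumptions rather than the patent's exact statistical fingerprint:

```python
import statistics

# Sketch of timing-fingerprint malware detection: build a baseline of
# execution times on a clean machine, then flag a run whose time deviates
# by more than k standard deviations (k = 3 is an illustrative choice).
def baseline(times):
    return statistics.mean(times), statistics.stdev(times)

def is_anomalous(run_time, mean, stdev, k=3.0):
    return abs(run_time - mean) > k * stdev

clean_runs = [1.00, 1.02, 0.98, 1.01, 0.99]   # seconds, clean-pedigree host
mean, stdev = baseline(clean_runs)
print(is_anomalous(1.01, mean, stdev))  # False: within the baseline
print(is_anomalous(1.50, mean, stdev))  # True: suspicious slowdown
```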
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
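Logically, the event set being offloaded is a time-ordered priority queue. A minimal sketch of the two operations the slave processors accelerate (shown sequentially here, with invented event names):

```python
import heapq

# The event set is a priority queue keyed on event time; insertion and
# extraction of the earliest event are the operations the master-slave
# hardware parallelizes. This sketch shows their sequential semantics.
def schedule(event_set, t, name):
    heapq.heappush(event_set, (t, name))

def next_event(event_set):
    return heapq.heappop(event_set)   # earliest-time event first

events = []
schedule(events, 5.0, "arrival")
schedule(events, 2.5, "departure")
schedule(events, 7.1, "arrival")
print(next_event(events))  # (2.5, 'departure')
```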
Smyth, L G; Martin, Z; Hall, B; Collins, D; Mealy, K
2012-09-01
Public and political pressures are increasing on doctors and in particular surgeons to demonstrate competence assurance. While surgical audit is an integral part of surgical practice, its implementation and delivery at a national level in Ireland is poorly developed. Limits to successful audit systems relate to lack of funding and administrative support. In Wexford General Hospital, we have a comprehensive audit system which is based on the Lothian Surgical Audit system. We wished to analyse the amount of time required by the Consultant, NCHDs and clerical staff on one surgical team to run a successful audit system. Data were collected over a calendar month. This included time spent coding and typing endoscopy procedures, coding and typing operative procedures, and typing and signing discharge letters. The total amount of time spent to run the audit system for one Consultant surgeon for one calendar month was 5,168 min or 86.1 h. Greater than 50% of this time related to work performed by administrative staff. Only the intern and administrative staff spent more than 5% of their working week attending to work related to the audit. An integrated comprehensive audit system requires very little time input by Consultant surgeons. Greater than 90% of the workload in running the audit was performed by the junior house doctors and administrative staff. The main financial implications for national audit implementation would relate to software and administrative staff recruitment. Implementation of the European Working Time Directive in Ireland may limit the time available for NCHDs to participate in clinical audit.
Real-time 3D change detection of IEDs
NASA Astrophysics Data System (ADS)
Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David
2012-06-01
Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted, prototype sensor system uses a high data rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips on the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data in real time to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds, with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change-detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment and change detection.
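The change-detection step can be illustrated with a much-simplified occupancy-grid comparison. This is not the VISR algorithm (which is not detailed above); the voxel size is an invented parameter, and both point clouds are assumed already registered in a common frame by the scene-alignment stage:

```python
from collections import defaultdict

VOXEL = 0.25  # hypothetical grid resolution in metres

def voxelize(points, voxel=VOXEL):
    """Bin 3D points (x, y, z) into occupied voxels with hit counts."""
    grid = defaultdict(int)
    for x, y, z in points:
        grid[(int(x // voxel), int(y // voxel), int(z // voxel))] += 1
    return grid

def detect_changes(reference, current, min_hits=1):
    """Voxels occupied in the current run but empty in the reference
    mapping are flagged as potential changes (e.g. a newly placed object).
    Assumes both runs are registered in the same coordinate frame."""
    ref = voxelize(reference)
    cur = voxelize(current)
    return [cell for cell, n in cur.items()
            if n >= min_hits and cell not in ref]
```

Raising `min_hits` trades sensitivity for robustness against LiDAR noise, the same trade-off any real-time change detector must make.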
Transitionless driving on adiabatic search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907
We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
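In standard notation (which may differ from the paper's own), the adiabatic search Hamiltonian interpolates between projectors onto the initial uniform superposition and the marked state, and the exact transitionless (counterdiabatic) driving term is built from the instantaneous eigenstates:

```latex
H(s) \;=\; (1-s)\,\bigl(\mathbb{1} - |\psi_0\rangle\langle\psi_0|\bigr)
        \;+\; s\,\bigl(\mathbb{1} - |m\rangle\langle m|\bigr),
\qquad s = t/T,
```

```latex
H_{\mathrm{CD}}(t) \;=\; i\hbar \sum_n
  \Bigl( |\partial_t n(t)\rangle\langle n(t)|
       \;-\; \langle n(t)|\partial_t n(t)\rangle\,|n(t)\rangle\langle n(t)| \Bigr),
```

where $|n(t)\rangle$ are the instantaneous eigenstates of $H$. Adding $H_{\mathrm{CD}}$ suppresses the non-adiabatic transitions entirely; the abstract's result concerns a uniform approximation to this exact driving term.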
PalymSys (TM): An extended version of CLIPS for construction and reasoning using blackboards
NASA Technical Reports Server (NTRS)
Bryson, Travis; Ballard, Dan
1994-01-01
This paper describes PalymSys(TM) -- an extended version of the CLIPS language that is designed to facilitate the implementation of blackboard systems. The paper first describes the general characteristics of blackboards and shows how a control blackboard architecture can be used by AI systems to examine their own behavior and adapt to real-time problem-solving situations by striking a balance between domain and control reasoning. The paper then describes the use of PalymSys in the development of a situation assessment subsystem for use aboard Army helicopters. This system performs real-time inferencing about the current battlefield situation using multiple domain blackboards as well as a control blackboard. A description of the control and domain blackboards and their implementation is presented. The paper also describes modifications made to the standard CLIPS 6.02 language in PalymSys(TM) 2.0. These include: (1) a dynamic Dempster-Shafer belief network whose structure is completely specifiable at run-time in the consequent of a PalymSys(TM) rule, (2) extension of the run command including a continuous run feature that enables the system to run even when the agenda is empty, and (3) a built-in communications link that uses shared memory to communicate with other independent processes.
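The Dempster-Shafer belief updating that such a network performs can be illustrated with the standard combination rule. This is a generic sketch, not PalymSys code: the rule syntax and network structure of PalymSys are not shown above, and the hypothesis names below are invented.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule: intersect focal elements, then
    renormalize by 1 - K, where K is the mass of empty intersections."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sensors reporting on a contact's status:
m1 = {frozenset({"hostile"}): 0.6, frozenset({"hostile", "friendly"}): 0.4}
m2 = {frozenset({"hostile"}): 0.5, frozenset({"hostile", "friendly"}): 0.5}
fused = dempster_combine(m1, m2)
```

Because the belief structure is just data (mass functions over focal sets), it can be built or rebuilt at run time, which is the flexibility the dynamic belief network extension provides.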
The automation of an inlet mass flow control system
NASA Technical Reports Server (NTRS)
Supplee, Frank; Tcheng, Ping; Weisenborn, Michael
1989-01-01
The automation of a closed-loop computer controlled system for the inlet mass flow system (IMFS) developed for a wind tunnel facility at Langley Research Center is presented. This new PC based control system is intended to replace the manual control system presently in use in order to fully automate the plug positioning of the IMFS during wind tunnel testing. Provision is also made for communication between the PC and a host-computer in order to allow total automation of the plug positioning and data acquisition during the complete sequence of predetermined plug locations. As extensive running time is programmed for the IMFS, this new automated system will save both manpower and tunnel running time.
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of APT (acquisition, pointing and tracking) system is analyzed and established. Then the model's C code, which can be used for real-time simulation, is generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on the programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.
The Aerospace Energy Systems Laboratory: A BITBUS networking application
NASA Technical Reports Server (NTRS)
Glover, Richard D.; Oneill-Rood, Nora
1989-01-01
The NASA Ames-Dryden Flight Research Facility developed a computerized aircraft battery servicing facility called the Aerospace Energy Systems Laboratory (AESL). This system employs distributed processing with communications provided by a 2.4-megabit BITBUS local area network. Customized handlers provide real-time status, remote command, and file transfer protocols between a central system running the iRMX-II operating system and ten slave stations running the iRMX-I operating system. The hardware configuration and software components required to implement this BITBUS application are described.
Amendola, Alessandra; Coen, Sabrina; Belladonna, Stefano; Pulvirenti, F Renato; Clemens, John M; Capobianchi, M Rosaria
2011-08-01
Diagnostic laboratories need automation that facilitates efficient processing and workflow management to meet today's challenges for expanding services and reducing cost, yet maintaining the highest levels of quality. Processing efficiency of two commercially available automated systems for quantifying HIV-1 and HCV RNA, Abbott m2000 system and Roche COBAS Ampliprep/COBAS TaqMan 96 (docked) systems (CAP/CTM), was evaluated in a mid/high throughput workflow laboratory using a representative daily workload of 24 HCV and 72 HIV samples. Three test scenarios were evaluated: A) one run with four batches on the CAP/CTM system, B) two runs on the Abbott m2000 and C) one run using the Abbott m2000 maxCycle feature (maxCycle) for co-processing these assays. Cycle times for processing, throughput and hands-on time were evaluated. Overall processing cycle time was 10.3, 9.1 and 7.6 h for Scenarios A), B) and C), respectively. Total hands-on time for each scenario was, in order, 100.0 (A), 90.3 (B) and 61.4 min (C). The interface of an automated analyzer to the laboratory workflow, notably system set up for samples and reagents and clean up functions, are as important as the automation capability of the analyzer for the overall impact to processing efficiency and operator hands-on time.
NASA Technical Reports Server (NTRS)
Manobianco, John; Zack, John W.; Taylor, Gregory E.
1996-01-01
This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations and for manned and unmanned launches, since KSC/CCAS is located in central Florida where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires a compromise between running the system fast enough that the output can be used before it expires and the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, the Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS to operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.
Assessing experience in the deliberate practice of running using a fuzzy decision-support system
Roveri, Maria Isabel; Manoel, Edison de Jesus; Onodera, Andrea Naomi; Ortega, Neli R. S.; Tessutti, Vitor Daniel; Vilela, Emerson; Evêncio, Nelson
2017-01-01
The judgement of skill experience and its levels is ambiguous though it is crucial for decision-making in sport sciences studies. We developed a fuzzy decision support system to classify experience of non-elite distance runners. Two Mamdani subsystems were developed based on expert running coaches’ knowledge. In the first subsystem, the linguistic variables of training frequency and volume were combined and the output defined the quality of running practice. The second subsystem yielded the level of running experience from the combination of the first subsystem output with the number of competitions and practice time. The model results were highly consistent with the judgment of three expert running coaches (r>0.88, p<0.001) and also with five other expert running coaches (r>0.86, p<0.001). From the expert’s knowledge and the fuzzy model, running experience is beyond the so-called "10-year rule" and depends not only on practice time, but on the quality of practice (training volume and frequency) and participation in competitions. The fuzzy rule-based model was very reliable, valid, deals with the marked ambiguities inherent in the judgment of experience and has potential applications in research, sports training, and clinical settings. PMID:28817655
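A minimal sketch of the first Mamdani subsystem follows. The membership functions and rule table here are invented for illustration; the paper's actual linguistic terms were elicited from expert running coaches and are not reproduced above.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical terms for weekly training frequency (sessions) and
# volume (km); not the coaches' actual definitions.
FREQ = {"low": (0, 0, 3), "medium": (1, 3.5, 6), "high": (4, 7, 7.1)}
VOL = {"low": (0, 0, 30), "medium": (10, 40, 70), "high": (50, 100, 101)}

def practice_quality(freq_per_week, vol_km):
    """First-subsystem sketch: min for the rule AND, then a weighted
    (centroid-like) defuzzification over the rule outputs in [0, 1]."""
    rules = [  # (frequency term, volume term, output quality score)
        ("low", "low", 0.1), ("medium", "medium", 0.5),
        ("high", "medium", 0.7), ("high", "high", 0.9),
    ]
    num = den = 0.0
    for ft, vt, out in rules:
        w = min(tri(freq_per_week, *FREQ[ft]), tri(vol_km, *VOL[vt]))
        num += w * out
        den += w
    return num / den if den else 0.0
```

The second subsystem would combine this output with competition count and practice time in the same fashion, which is how the model moves beyond a single "10-year rule" threshold.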
Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON
NASA Astrophysics Data System (ADS)
Morrissey, Kevin
A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs using VOLTTRON(TM), a multi-agent python-based software platform dedicated to power systems. The VOLTTRON(TM) system designed for this project runs synergistically with the larger University of Washington VOLTTRON(TM) environment, which is designed to operate UW device communications and databases as well as to perform real-time operations for research. One such research algorithm that operates simultaneously with this PV Smoothing System is an energy cost optimization system which optimizes net demand and associated cost throughout a day using the BESS. The PV Smoothing System features an active low-pass filter with an adaptable time constant, as well as adjustable limitations on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, are studied for their impact on system performance. It was found that both systems provide a high level of PV smoothing performance, within 8% of the ideal case where the best time constant is known ahead of time. The system was run in real time using VOLTTRON(TM) with BESS limitations of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, providing smoothing on the PV generation and updating the time constant periodically using the adaptive time constant selection method.
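The core smoothing mechanism described above, a first-order low-pass filter whose shortfall is supplied by the BESS subject to power limits, can be sketched as follows. The parameter values are illustrative, not the thesis's tuned settings, and the energy-capacity constraint is omitted for brevity.

```python
def smooth_pv(pv_series, dt=1.0, tau=300.0, p_max=5.0):
    """First-order low-pass filter on PV power (kW, sampled every dt s).
    The BESS supplies the difference between raw and smoothed power,
    clipped to the inverter limit p_max (kW). Returns (smoothed, battery)."""
    alpha = dt / (tau + dt)          # discrete-time filter coefficient
    smoothed, battery = [], []
    y = pv_series[0]                 # filter state
    for p in pv_series:
        y += alpha * (p - y)                 # exponential smoothing step
        b = max(-p_max, min(p_max, y - p))   # BESS output, power-limited
        smoothed.append(p + b)               # grid sees raw PV + BESS
        battery.append(b)
    return smoothed, battery
```

Making `tau` adaptive (re-chosen every few days from recent variability, as in the thesis's 7-day update period) changes only how `tau` is supplied; the filter loop stays the same.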
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software to monitor application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation and the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.
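Library instrumentation of the kind RTM performs can be sketched with a wrapper that reports each call's run time to a central monitor. This is a generic illustration, not RTM's actual mechanism; the function and statistics names are invented.

```python
import functools
import time
from collections import defaultdict

# Central monitor store: per-function call counts and cumulative time.
STATS = defaultdict(lambda: {"calls": 0, "total_s": 0.0})

def monitored(fn):
    """Instrumentation sketch: wrap a function so every call records its
    wall-clock run time, which an analyzer could later use to adapt the
    service configuration or re-provision resources."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            rec = STATS[fn.__name__]
            rec["calls"] += 1
            rec["total_s"] += time.perf_counter() - t0
    return wrapper

@monitored
def handle_request(n):
    return sum(range(n))

handle_request(1000)  # STATS now holds one timed call
```

Hardware-side monitoring via performance counters would feed the same analyzer, but requires OS or PMU interfaces outside the scope of this sketch.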
Isocapnic hyperpnea training improves performance in competitive male runners.
Leddy, John J; Limprasertkul, Atcharaporn; Patel, Snehal; Modlich, Frank; Buyea, Cathy; Pendergast, David R; Lundgren, Claes E G
2007-04-01
The effects of voluntary isocapnic hyperpnea (VIH) training (10 h over 4 weeks, 30 min/day) on ventilatory system and running performance were studied in 15 male competitive runners, 8 of whom trained twice weekly for 3 more months. Control subjects (n = 7) performed sham-VIH. Vital capacity (VC), FEV1, maximum voluntary ventilation (MVV), maximal inspiratory and expiratory mouth pressures, VO2max, 4-mile run time, treadmill run time to exhaustion at 80% VO2max, serum lactate, total ventilation (V(E)), oxygen consumption (VO2) oxygen saturation and cardiac output were measured before and after 4 weeks of VIH. Respiratory parameters and 4-mile run time were measured monthly during the 3-month maintenance period. There were no significant changes in post-VIH VC and FEV1 but MVV improved significantly (+10%). Maximal inspiratory and expiratory mouth pressures, arterial oxygen saturation and cardiac output did not change post-VIH. Respiratory and running performances were better 7- versus 1 day after VIH. Seven days post-VIH, respiratory endurance (+208%) and treadmill run time (+50%) increased significantly accompanied by significant reductions in respiratory frequency (-6%), V(E) (-7%), VO2 (-6%) and lactate (-18%) during the treadmill run. Post-VIH 4-mile run time did not improve in the control group whereas it improved in the experimental group (-4%) and remained improved over a 3 month period of reduced VIH frequency. The improvements cannot be ascribed to improved blood oxygen delivery to muscle or to psychological factors.
Performance Evaluation of a Firm Real-Time DataBase System
1995-01-01
after its deadline has passed. StarBase differs from previous real-time database work in that a) it relies on a real-time operating system which...StarBase, running on a real-time operating system kernel, RT-Mach. We discuss how performance was evaluated in StarBase using the StarBase workload
Time and Space Partitioning the EagleEye Reference Mission
NASA Astrophysics Data System (ADS)
Bos, Victor; Mendham, Peter; Kauppinen, Panu; Holsti, Niklas; Crespo, Alfons; Masmano, Miguel; de la Puente, Juan A.; Zamorano, Juan
2013-08-01
We discuss experiences gained by porting a Software Validation Facility (SVF) and a satellite Central Software (CSW) to a platform with support for Time and Space Partitioning (TSP). The SVF and CSW are part of the EagleEye Reference mission of the European Space Agency (ESA). As a reference mission, EagleEye is a perfect candidate to evaluate practical aspects of developing satellite CSW for and on TSP platforms. The specific TSP platform we used consists of a simulated LEON3 CPU controlled by the XtratuM separation micro-kernel. On top of this, we run five separate partitions. Each partition runs its own real-time operating system or Ada run-time kernel, which in turn are running the application software of the CSW. We describe issues related to partitioning; inter-partition communication; scheduling; I/O; and fault-detection, isolation, and recovery (FDIR).
Damasceno, Mayara V.; Duarte, Marcos; Pasqua, Leonardo A.; Lima-Silva, Adriano E.; MacIntosh, Brian R.; Bertuzzi, Rômulo
2014-01-01
Purpose Previous studies report that static stretching (SS) impairs running economy. Assuming that pacing strategy relies on rate of energy use, this study aimed to determine whether SS would modify pacing strategy and performance in a 3-km running time-trial. Methods Eleven recreational distance runners performed a) a constant-speed running test without previous SS and a maximal incremental treadmill test; b) an anthropometric assessment and a constant-speed running test with previous SS; c) a 3-km time-trial familiarization on an outdoor 400-m track; d and e) two 3-km time-trials, one with SS (experimental situation) and another without (control situation) previous static stretching. The order of sessions d and e was randomized in a counterbalanced fashion. Sit-and-reach and drop jump tests were performed before the 3-km running time-trial in the control situation and before and after stretching exercises in the SS. Running economy, stride parameters, and electromyographic activity (EMG) of vastus medialis (VM), biceps femoris (BF) and gastrocnemius medialis (GA) were measured during the constant-speed tests. Results The overall running time did not change with condition (SS 11:35±00:31 s; control 11:28±00:41 s, p = 0.304), but the first 100 m was completed at a significantly lower velocity after SS. Surprisingly, SS did not modify the running economy, but the iEMG for the BF (+22.6%, p = 0.031), stride duration (+2.1%, p = 0.053) and range of motion (+11.1%, p = 0.0001) were significantly modified. Drop jump height decreased following SS (−9.2%, p = 0.001). Conclusion Static stretching impaired neuromuscular function, resulting in a slow start during a 3-km running time-trial, thus demonstrating the fundamental role of the neuromuscular system in the self-selected speed during the initial phase of the race. PMID:24905918
Hardware-In-The-Loop Power Extraction Using Different Real-Time Platforms (Postprint)
2008-11-01
each real-time operating system. However, discrepancies in test results obtained from the NI system can be resolved. This paper briefly details...same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink models
Stone, Brandon L; Heishman, Aaron D; Campbell, Jay A
2017-07-31
The purpose of this study was to compare the effects of an experimental versus traditional military run training on 2-mile run ability in Army Reserve Officer Training Corps (ROTC) cadets. Fifty college-aged cadets were randomly placed into two groups and trained for four weeks with either an experimental running program (EXP, n=22) comprised of RPE intensity-specific, energy system based intervals or with a traditional military running program (TRA, n=28), utilizing a crossover study design. A 2-mile run assessment was performed just prior to the start, at the end of the first 4 weeks, and again after the second 4 weeks of training following crossover. The EXP program significantly decreased 2-mile run times (961.3s ± 155.8s to 943.4 ± 140.2s, P=0.012, baseline to post 1) while the TRA group experienced a significant increase in run times (901.0 ± 79.2s vs. 913.9 ± 82.9s) over the same training period. There was a moderate effect size (d = 0.61, P=0.07) for the experimental run program to "reverse" the adverse effects of the traditional program within the 4-week training period (post 1 to post 2) following treatment crossover. Thus, for short-term training of military personnel, an RPE intensity-specific running program comprised of aerobic and anaerobic system development can enhance 2-mile run performance beyond that of a traditional program while reducing training volume (60 min per session vs. 43.2 min per session, respectively). Future research should extend the training period to determine the efficacy of this training approach for long-term improvement of aerobic capacity and possible reduction of musculoskeletal injury.
Salivary cortisol and α-amylase responses to repeated bouts of downhill running.
Mckune, Andrew J; Bach, Christopher W; Semple, Stuart J; Dyer, Barry J
2014-01-01
To determine the hypothalamic-pituitary-adrenal (HPA) axis and sympathoadrenal (SA) system response to repeated bouts of downhill running. Eleven active but untrained males (age: 19.7 ± 0.4 y; VO2peak 47.8 ± 3.6 ml/kg/min) performed two 60 min bouts of downhill running (-13.5% gradient), separated by 14 days, at a speed eliciting 75% of their VO2peak on a level grade. Saliva samples were collected before (baseline), after, and every hour for 12 h and every 24 h for 6 days after each run. Salivary cortisol and α-amylase levels were measured as markers of the HPA axis and SA response, respectively. Results were analyzed using repeated measures ANOVA (12 h period: 2 × 14; 24 h intervals 2 × 7, P ≤ 0.05) with Tukey post-hoc tests where appropriate. Paired samples t-tests were used to compare collapsed data vs. baseline measurements. There were no significant group × time interactions for cortisol or α-amylase for the hourly samples up to 12 h after each run, nor for the 24 h samples up to 6 days later. The 24 h samples for α-amylase showed a significant group effect between runs (Run 1: 69.77 ± 7.68 vs. Run 2: 92.19 ± 7.67 U/ml; P = 0.04). Significant time effects were measured for both cortisol (decreased 2 h to 12 h post-run) and α-amylase (elevated immediately after, 1 h and 2 h post-run) (P < 0.001). The 24 h period group effect for salivary α-amylase suggested an adaptation in the sympathoadrenal system that may alter the systemic inflammatory response to exercise-induced muscle damage but may also reflect enhanced mucosal immunity. © 2014 Wiley Periodicals, Inc.
Commanding and Controlling Satellite Clusters (IEEE Intelligent Systems, November/December 2000)
2000-01-01
real-time operating system, a message-passing OS well suited for distributed...(flattened acronym table: ObjectAgent; RTOS = real-time operating system; SCL = Space command language; RDMS = Rational database management system; TS-21)...engineer with Princeton Satellite Systems. She is working with others to develop ObjectAgent software to run on the OSE Real Time Operating System.
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
controlled time stepping method to interactively design running robots. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, “Fast multi-body...” Started working in simulation after attempting to use software like this to test software running on my robots. The libraries that produce these beautiful results have failed at simulating robotic manipulation. Postulate: It is easier to...
Pathways to designing and running an operational flood forecasting system: an adventure game!
NASA Astrophysics Data System (ADS)
Arnal, Louise; Pappenberger, Florian; Ramos, Maria-Helena; Cloke, Hannah; Crochemore, Louise; Giuliani, Matteo; Aalbers, Emma
2017-04-01
In the design and building of an operational flood forecasting system, a large number of decisions have to be taken. These include technical decisions related to the choice of the meteorological forecasts to be used as input to the hydrological model, the choice of the hydrological model itself (its structure and parameters), the selection of a data assimilation procedure to run in real-time, the use (or not) of a post-processor, and the computing environment to run the models and display the outputs. Additionally, a number of trans-disciplinary decisions are also involved in the process, such as the way the needs of the users will be considered in the modelling setup and how the forecasts (and their quality) will be efficiently communicated to ensure usefulness and build confidence in the forecasting system. We propose to reflect on the numerous, alternative pathways to designing and running an operational flood forecasting system through an adventure game. In this game, the player is the protagonist of an interactive story driven by challenges, exploration and problem-solving. For this presentation, you will have a chance to play this game, acting as the leader of a forecasting team at an operational centre. Your role is to manage the actions of your team and make sequential decisions that impact the design and running of the system in preparation to and during a flood event, and that deal with the consequences of the forecasts issued. Your actions are evaluated by how much they cost you in time, money and credibility. Your aim is to take decisions that will ultimately lead to a good balance between time and money spent, while keeping your credibility high over the whole process. This game was designed to highlight the complexities behind decision-making in an operational forecasting and emergency response context, in terms of the variety of pathways that can be selected as well as the timescale, cost and timing of effective actions.
A real-time data-acquisition and analysis system with distributed UNIX workstations
NASA Astrophysics Data System (ADS)
Yamashita, H.; Miyamoto, K.; Maruyama, K.; Hirosawa, H.; Nakayoshi, K.; Emura, T.; Sumi, Y.
1996-02-01
A compact data-acquisition system using three RISC/UNIX™ workstations (SUN™/SPARCstation™) with real-time capabilities of monitoring and analysis has been developed for the study of photonuclear reactions with the large-acceptance spectrometer TAGX. One workstation acquires data from memory modules in the front-end electronics (CAMAC and TKO) with a maximum speed of 300 Kbytes/s, where data size times instantaneous rate is 1 Kbyte × 300 Hz. Another workstation, which has real-time capability for run monitoring, gets the data with a buffer manager called NOVA. The third workstation analyzes the data and reconstructs the event. In addition to a general hardware and software description, priority settings and run control by shell scripts are described. This system has recently been used successfully in a two month long experiment.
Characteristics of Operational Space Weather Forecasting: Observations and Models
NASA Astrophysics Data System (ADS)
Berger, Thomas; Viereck, Rodney; Singer, Howard; Onsager, Terry; Biesecker, Doug; Rutledge, Robert; Hill, Steven; Akmaev, Rashid; Milward, George; Fuller-Rowell, Tim
2015-04-01
In contrast to research observations, models and ground support systems, operational systems are characterized by real-time data streams and run schedules, with redundant backup systems for most elements of the system. We review the characteristics of operational space weather forecasting, concentrating on the key aspects of ground- and space-based observations that feed models of the coupled Sun-Earth system at the NOAA/Space Weather Prediction Center (SWPC). Building on the infrastructure of the National Weather Service, SWPC is working toward a fully operational system based on the GOES weather satellite system (constant real-time operation with back-up satellites), the newly launched DSCOVR satellite at L1 (constant real-time data network with AFSCN backup), and operational models of the heliosphere, magnetosphere, and ionosphere/thermosphere/mesosphere systems run on the Weather and Climate Operational Supercomputing System (WCOSS), one of the world's largest and fastest operational computer systems, which will be upgraded to a dual 2.5 Pflop system in 2016. We review plans for further operational space weather observing platforms being developed in the context of the Space Weather Operations Research and Mitigation (SWORM) task force in the Office of Science and Technology Policy (OSTP) at the White House. We also review the current operational model developments at SWPC, concentrating on the differences between the research codes and the modified real-time versions that must run with zero fault tolerance on the WCOSS systems. Understanding the characteristics and needs of the operational forecasting community is key to producing research into the coupled Sun-Earth system with maximal societal benefit.
[A design of simple ventilator control system based on LabVIEW].
Pei, Baoqing; Xu, Shengwei; Li, Hui; Li, Deyu; Pei, Yidong; He, Haixing
2011-01-01
This paper describes a ventilator control system designed to control proportional valves and motors. LabVIEW was used to control these components; to design, validate, and evaluate the control algorithms; and to establish a hardware-in-the-loop platform. The system has two hierarchical layers: the high layer runs the non-real-time program and the low layer runs the real-time program, with the two layers communicating over TCP/IP. The program is divided into several modules, which can be expanded and maintained easily, and the results of the prototype design can be carried over seamlessly to embedded products. Overall, the system proved useful for development based on OEM products.
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
Applying a semantic information Petri Net modeling method to AUV systems design
NASA Astrophysics Data System (ADS)
Feng, Xiao-Ning; Wang, Shuo; Wang, Zhuo; Liu, Qun
2008-12-01
This paper informally introduces colored object-oriented Petri nets (COOPN) through an application to an AUV system. Reflecting the characteristics of the AUV system's running environment, the object-oriented method is used not only to partition system modules but also to construct a refined running model of the AUV system; the colored Petri net method is then used to establish a hierarchically detailed model from which performance-analysis information about the system can be obtained. By analyzing the model implementation, errors in architecture design and function realization can be found. If these errors are corrected in time, experiment time in the pool can be reduced and cost can be saved.
NASA Technical Reports Server (NTRS)
Roberts, Floyd E., III
1994-01-01
Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emissivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow test-specific macros to be added to system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.
The NIFFTE Data Acquisition System
NASA Astrophysics Data System (ADS)
Qu, Hai; Niffte Collaboration
2011-10-01
The Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) will employ a novel, high granularity, pressurized Time Projection Chamber to measure fission cross-sections of the major actinides to high precision over a wide incident neutron energy range. These results will improve nuclear data accuracy and benefit the fuel cycle in the future. The NIFFTE data acquisition system (DAQ) has been designed and implemented on the prototype TPC. Lessons learned from engineering runs have been incorporated into some design changes that are being implemented before the next run cycle. A fully instrumented sextant of EtherDAQ cards (16 sectors, 496 channels) will be used for the next run cycle. The Maximum Integrated Data Acquisition System (MIDAS) has been chosen and customized to configure and run the experiment. It also meets the requirement for remote control and monitoring of the system. The integration of the MIDAS online database with the persistent PostgreSQL database has been implemented for experiment usage. The detailed design and current status of the DAQ system will be presented.
The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems
2001-01-01
real-time operating system selection are also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the...experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded...ObjectAgent software to run on the OSE Real-Time Operating System. In addition, she is responsible for the integration of ObjectAgent
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Kumar, Sujay V.; Kuligowski, Robert J.; Langston, Carrie
2013-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the NASA Land Information System (LIS) with the Noah land surface model (LSM). Output from the SPoRT-LIS run is used to initialize land surface variables for local modeling applications at select National Weather Service (NWS) partner offices, and can be displayed in decision support systems for situational awareness and drought monitoring. The SPoRT-LIS is run over a domain covering the southern and eastern United States, fully nested within the National Centers for Environmental Prediction Stage IV precipitation analysis grid, which provides precipitation forcing to the offline LIS-Noah runs. The SPoRT Center seeks to expand the real-time LIS domain to the entire Continental U.S. (CONUS); however, geographical limitations of the Stage IV analysis product have inhibited this expansion. Therefore, a goal of this study is to test alternative precipitation forcing datasets that can enable the LIS expansion by improving upon the current geographical limitations of the Stage IV product. The four precipitation forcing datasets that are inter-compared on a 4-km resolution CONUS domain include the Stage IV, an experimental GOES quantitative precipitation estimate (QPE) from NESDIS/STAR, the National Mosaic and QPE (NMQ) product from the National Severe Storms Laboratory, and the North American Land Data Assimilation System phase 2 (NLDAS-2) analyses. The NLDAS-2 dataset is used as the control run, with each of the other three datasets considered experimental runs compared against the control. The regional strengths, weaknesses, and biases of each precipitation analysis are identified relative to the NLDAS-2 control in terms of accumulated precipitation pattern and amount, and the impacts on the subsequent LSM spin-up simulations. The ultimate goal is to identify an alternative precipitation forcing dataset that can best support an expansion of the real-time SPoRT-LIS to a domain covering the entire CONUS.
Reliability of Vibrating Mesh Technology.
Gowda, Ashwin A; Cuccia, Ann D; Smaldone, Gerald C
2017-01-01
For delivery of inhaled aerosols, vibrating mesh systems are more efficient than jet nebulizers and do not require added gas flow. We assessed the reliability of a vibrating mesh nebulizer (Aerogen Solo, Aerogen Ltd, Galway, Ireland) suitable for use in mechanical ventilation. An initial observational study was performed with 6 nebulizers to determine run time and efficiency using normal saline and distilled water. Nebulizers were run until cessation of aerosol production was noted, with residual volume and run time recorded. Three controllers were used to assess the impact of the controller on nebulizer function. Following the observational study, a more detailed experimental protocol was performed using 20 nebulizers. For this analysis, 2 controllers were used, and time to cessation of aerosol production was noted. Gravimetric techniques were used to measure residual volume. Total nebulization time and residual volume were recorded. Failure was defined as premature cessation of aerosol production, represented by a residual volume of > 10% of the nebulizer charge. In the initial observational protocol, an unexpected sporadic failure rate of 25% was noted in 55 experimental runs. In the experimental protocol, a failure rate of 30% was noted in 40 experimental runs. Failed runs in the experimental protocol exhibited a wide range of retained volume, averaging (mean ± SD) 36 ± 21.3%, compared with 3.2 ± 1.5% (P = .001) in successful runs. Small but significant differences existed in nebulization time between controllers. Aerogen Solo nebulization was often randomly interrupted, with a wide range of retained volumes. Copyright © 2017 by Daedalus Enterprises.
Tessutti, Vitor; Ribeiro, Ana Paula; Trombini-Souza, Francis; Sacco, Isabel C N
2012-01-01
The practice of running has consistently increased worldwide, and with it, related lower limb injuries. The type of running surface has been associated with running injury etiology, in addition to other factors such as the relationship between the amount and intensity of training. There is still controversy in the literature regarding the biomechanical effects of different types of running surfaces on foot-floor interaction. The aim of this study was to investigate the influence of running on asphalt, concrete, natural grass, and rubber on in-shoe pressure patterns in adult recreational runners. Forty-seven adult recreational runners ran twice for 40 m on each of the four surfaces at 12 km · h(-1) (± 5%). Peak pressure, pressure-time integral, and contact time were recorded by Pedar X insoles. Asphalt and concrete were similar for all plantar variables and pressure zones. Running on grass produced peak pressures 9.3% to 16.6% lower (P < 0.001) than the other surfaces in the rearfoot and 4.7% to 12.3% (P < 0.05) lower in the forefoot. The contact time on rubber was greater than on concrete for the rearfoot and midfoot. The behaviour of rubber was similar to that obtained for the rigid surfaces - concrete and asphalt - possibly because of its time of usage (five years). Running on natural grass attenuates in-shoe plantar pressures in recreational runners. If a runner controls the amount and intensity of practice, running on grass may reduce the total stress on the musculoskeletal system compared with running on more rigid surfaces, such as asphalt and concrete.
Real-time analysis system for gas turbine ground test acoustic measurements.
Johnston, Robert T
2003-10-01
This paper provides an overview of a data system upgrade to the Pratt and Whitney facility designed for making acoustic measurements on aircraft gas turbine engines. A data system upgrade was undertaken because the return on investment was determined to be extremely high: the savings on the first test series recovered the cost of the hardware. The commercial system selected for this application utilizes 48 input channels, which allows either 1/3-octave and/or narrow-band analyses to be performed in real time. A high-speed disk drive allows raw data from all 48 channels to be stored simultaneously while the analyses are being performed. Results of tests to ensure compliance of the new system with regulations and with existing systems are presented. Test times were reduced from 5 h to 1 h of engine run time per engine configuration by the introduction of this new system. Conservative cost reduction estimates for future acoustic testing are 75% on items related to engine run time and 50% on items related to the overall length of the test.
New NASA 3D Animation Shows Seven Days of Simulated Earth Weather
2014-08-11
This visualization shows early test renderings of a global computational model of Earth's atmosphere based on data from NASA's Goddard Earth Observing System Model, Version 5 (GEOS-5). This particular run, called Nature Run 2, was run on a supercomputer, spanned 2 years of simulation time at 30-minute intervals, and produced petabytes of output. The visualization spans a little more than 7 days of simulation time, which is 354 time steps. The time period was chosen because a simulated category-4 typhoon developed off the coast of China. The 7-day period is repeated several times during the course of the visualization. Credit: NASA's Scientific Visualization Studio. Read more or download here: svs.gsfc.nasa.gov/goto?4180
Raibert, M H
1986-03-14
Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
Low-cost optical data acquisition system for blade vibration measurement
NASA Technical Reports Server (NTRS)
Posta, Stephen J.
1988-01-01
A low cost optical data acquisition system was designed to measure deflection of vibrating rotor blade tips. The basic principle of the new design is to record raw data, which is a set of blade arrival times, in memory and to perform all processing by software following a run. This approach yields a simple and inexpensive system with the least possible hardware. Functional elements of the system were breadboarded and operated satisfactorily during rotor simulations on the bench, and during a data collection run with a two-bladed rotor in the Lewis Research Center Spin Rig. Software was written to demonstrate the sorting and processing of data stored in the system control computer, after retrieval from the data acquisition system. The demonstration produced an accurate graphical display of deflection versus time.
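The post-run processing described above reduces each recorded blade arrival time to a tip deflection. A minimal sketch of that reduction, assuming a rigid-rotor reference at constant speed (the function and parameter names here are illustrative, not taken from the report):

```python
import math

def tip_deflection(arrival_time_s, blade_index, n_blades, rpm, radius_m, rev_start_s):
    """Convert a blade-tip arrival time at a fixed optical probe into a
    tangential tip deflection: the difference between the measured and the
    expected arrival time (for a rigid rotor at constant speed) multiplied
    by the tip speed."""
    omega = rpm * 2.0 * math.pi / 60.0                       # shaft speed, rad/s
    expected = rev_start_s + (blade_index / n_blades) * (60.0 / rpm)
    return (arrival_time_s - expected) * omega * radius_m    # meters

# Blade 1 of 2 arrives 5 microseconds late at 10,000 rpm on a 0.25 m radius rotor:
d = tip_deflection(arrival_time_s=0.003 + 5e-6, blade_index=1, n_blades=2,
                   rpm=10000, radius_m=0.25, rev_start_s=0.0)
print(round(d * 1000, 2))  # ~1.31 mm tangential deflection
```

Repeating this for every arrival time stored in memory yields the deflection-versus-time display the demonstration produced.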
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
Mentat/A: Medium grain parallel processing
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.
1992-01-01
The objective of this project is to test the Algorithm to Architecture Mapping Model (ATAMM) firing rules using the Mentat run-time system and the Mentat Programming Language (MPL). A special version of Mentat, Mentat/A (Mentat/ATAMM) was constructed. This required changes to: (1) modify the run-time system to control queue length and inhibit actor firing until required data tokens are available and space is available in the input queues of all of the direct descendent actors; (2) disallow the specification of persistent object classes in the MPL; and (3) permit only decision free graphs in the MPL. We were successful in implementing the spirit of the plan, although some goals changed as we came to better understand the problem. We report on what we accomplished and the lessons we learned. The Mentat/A run-time system is discussed, and we briefly present the compiler. We present results for three applications and conclude with a summary and some observations. Appendix A contains a list of technical reports and published papers partially supported by the grant. Appendix B contains listings for the three applications.
2018-04-01
...2006. Since that time, SS-RICS has been the integration platform for many robotics algorithms using a variety of different disciplines from cognitive...voice recognition. Each noise level was run 10 times per gender, yielding 60 total runs. Two paths were chosen for testing (Paths A and B) of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
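For context, the "standard Taylor series coefficients" that serve as the report's baseline can be generated by moment matching: require the stencil to be exact on monomials up to the stencil width. The sketch below does this with exact rational arithmetic; it reproduces the classical weights, not the report's optimized coefficients, which instead trade formal order for run time under error-bound or source-bandwidth constraints.

```python
from fractions import Fraction

def taylor_fd_coeffs(offsets, deriv):
    """Standard (Taylor) finite-difference weights for the deriv-th derivative
    on integer grid offsets s_j, found by matching Taylor-series moments:
    sum_j w_j * s_j**k = deriv! * delta(k, deriv) for k = 0..len(offsets)-1."""
    n = len(offsets)
    A = [[Fraction(s) ** k for s in offsets] for k in range(n)]  # moment matrix
    b = [Fraction(0)] * n
    fact = 1
    for i in range(2, deriv + 1):
        fact *= i
    b[deriv] = Fraction(fact)
    # Gauss-Jordan elimination, exact because every entry is a Fraction.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = A[col][col]
        A[col] = [a / inv for a in A[col]]
        b[col] /= inv
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

# Classic 5-point stencil for the second derivative (4th-order accurate):
w = taylor_fd_coeffs([-2, -1, 0, 1, 2], 2)
print(", ".join(str(c) for c in w))  # -1/12, 4/3, -5/2, 4/3, -1/12
```

These maximal-order weights are the reference point against which alternative coefficient sets are judged for run time at a given error tolerance.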
Shallow-Water Nitrox Diving, the NASA Experience
NASA Technical Reports Server (NTRS)
Fitzpatrick, Daniel T.
2009-01-01
NASA's Neutral Buoyancy Laboratory (NBL) contains a 6.2-million-gallon, 12-meter-deep pool where astronauts prepare for space missions involving space walks (extravehicular activity, EVA). Training is conducted in a space suit (extravehicular mobility unit, EMU) pressurized to 4.0 - 4.3 psi for up to 6.5 hours while breathing a 46% NITROX mix. Since the facility opened in 1997, over 30,000 hours of suited training have been completed with no occurrence of decompression sickness (DCS) or oxygen toxicity. This study examines the last 5 years of astronaut suited training runs. All suited runs are computer monitored, and data are recorded in the Environmental Control System (ECS) database. Astronaut training runs from 2004 - 2008 were reviewed, and specific data including total run time, maximum depth, and average depth were analyzed. One hundred twenty-seven astronauts and cosmonauts completed 2,231 training runs totaling 12,880 exposure hours. Data were available for 96% of the runs. The suit configuration produces a maximum equivalent air depth of 7 meters, essentially eliminating the risk of DCS. Based on average run depth and time, approximately 17% of the training runs exceeded the NOAA maximum single oxygen exposure limits, with no resulting oxygen toxicity. The NBL suited training protocols are safe and time tested. Consideration should be given to reevaluating the NOAA oxygen exposure limits for PO2 levels at or below 1 ATA.
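The equivalent-air-depth figure quoted above follows from the standard nitrox EAD formula. A quick sketch (metric, using the common 10 msw per atmosphere approximation; this ignores suit pressurization, so it slightly underestimates the value reported for the suited configuration):

```python
def equivalent_air_depth(depth_m, fo2):
    """Equivalent air depth in meters of seawater for a nitrox mix, using
    the standard formula with 10 msw ~ 1 atm:
        EAD = (depth + 10) * (1 - FO2) / 0.79 - 10
    """
    fn2 = 1.0 - fo2
    return (depth_m + 10.0) * fn2 / 0.79 - 10.0

def po2_ata(depth_m, fo2):
    """Oxygen partial pressure (ata) at depth for the given mix."""
    return (depth_m / 10.0 + 1.0) * fo2

# Breathing 46% nitrox at the 12 m bottom of the NBL pool:
print(round(equivalent_air_depth(12, 0.46), 1))  # 5.0 (meters)
print(round(po2_ata(12, 0.46), 2))               # 1.01 (ata)
```

The low EAD explains the absence of DCS, and the near-1 ata PO2 is why the NOAA single-exposure limits are the binding constraint rather than acute oxygen toxicity.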
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes inter-connected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing
Real-Time Imaging with a Pulsed Coherent CO2 Laser Radar
1997-01-01
30 joule) transmitted energy levels has just begun. The FLD program will conclude in 1997 with the demonstration of a full-up, real-time operating system. This...The master system and VMEbus controller is an off-the-shelf controller based on the Motorola 68040 processor running the VxWorks real-time operating system. Application
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core and minimize run-time requirements.
Critical Velocity Is Associated With Combat-Specific Performance Measures in a Special Forces Unit.
Hoffman, Mattan W; Stout, Jeffrey R; Hoffman, Jay R; Landua, Geva; Fukuda, David H; Sharvit, Nurit; Moran, Daniel S; Carmon, Erez; Ostfeld, Ishay
2016-02-01
The purpose of this study was to examine the relationship of critical velocity (CV) and anaerobic distance capacity (ADC) to combat-specific tasks (CST) in a special forces (SF) unit. Eighteen male soldiers (mean ± SD; age: 19.9 ± 0.8 years; height: 177.6 ± 6.6 cm; body mass: 74.1 ± 5.8 kg; body mass index [BMI]: 23.52 ± 1.63) from an SF unit of the Israel Defense Forces volunteered to complete a 3-minute all-out run along with CST (2.5-km run, 50-m casualty carry, and 30-m repeated sprints with "rush" shooting [RPTDS]). Estimates of CV and ADC from the 3-minute all-out run were determined from data downloaded from a global positioning system device worn by each soldier, with CV calculated as the average velocity of the final 30 seconds of the run and ADC as the velocity-time integral above CV. Critical velocity exhibited significant negative correlations with the 2.5-km run time (r = -0.62, p < 0.01) and RPTDS time (r = -0.71, p < 0.01). In addition, CV was positively correlated with the average velocity during the 2.5-km run (r = 0.64, p < 0.01). Stepwise regression identified CV as the performance measure most significantly associated with the 2.5-km run time, whereas BMI and CV were significant predictors of RPTDS time (R(2) = 0.67, p ≤ 0.05). Using the 3-minute all-out run as a testing measurement, combat personnel may be offered a more efficient and simpler way of assessing both aerobic and anaerobic capabilities (CV and ADC) within a relatively large sample.
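The CV and ADC definitions given in this abstract are straightforward to compute from a sampled velocity trace. A sketch under those stated definitions (the 1 Hz sampling rate and the synthetic decay profile are illustrative assumptions):

```python
import math

def cv_and_adc(velocities, dt=1.0, tail_s=30):
    """Critical velocity = mean velocity over the final 30 s of the 3-minute
    all-out run; anaerobic distance capacity = velocity-time integral of the
    portion of the run above CV."""
    n_tail = int(tail_s / dt)
    cv = sum(velocities[-n_tail:]) / n_tail
    adc = sum((v - cv) * dt for v in velocities if v > cv)
    return cv, adc

# Synthetic 1 Hz GPS velocity trace: a fast start decaying toward a plateau.
trace = [4.0 + 2.0 * math.exp(-t / 40.0) for t in range(180)]
cv, adc = cv_and_adc(trace)
print(round(cv, 2), round(adc, 1))  # CV ~ 4.03 m/s, ADC ~ 74.2 m
```

CV estimates the highest sustainable aerobic velocity, while ADC corresponds to the finite distance reserve (often called D') spent running above it.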
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.
2008-01-01
Studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven-hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC, on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS.
The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPoRT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations have been collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPoRT experimental runs, and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubois, P.F.
1989-05-16
This paper discusses the Basis system. Basis is a program development system for scientific programs. It has been developed over the last five years at Lawrence Livermore National Laboratory (LLNL), where it is now used in about twenty major programming efforts. The Basis system includes two major components, a program development system and a run-time package. The run-time package provides the Basis Language interpreter, through which the user does input, output, plotting, and control of the program's subroutines and functions. Variables in the scientific packages are known to this interpreter, so that the user may arbitrarily print, plot, and calculate with any major program variables. Also provided are facilities for dynamic memory management, terminal logs, error recovery, text-file i/o, and the attachment of non-Basis-developed packages.
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; White, Kristopher D.
2014-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL (Jedlovec 2013; Ralph et al. 2013; Merceret et al. 2013) is running a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework (hereafter referred to as the "SPoRT-LIS"). Output from the real-time SPoRT-LIS is used for (1) initializing land surface variables for local modeling applications, and (2) display in decision support systems for situational awareness and drought monitoring at select NOAA/National Weather Service (NWS) partner offices. The SPoRT-LIS is currently run over a domain covering the southeastern half of the Continental United States (CONUS), with an additional experimental real-time run over the entire CONUS and surrounding portions of southern Canada and northern Mexico. The experimental CONUS run incorporates hourly quantitative precipitation estimation (QPE) from the National Severe Storms Laboratory Multi-Radar Multi-Sensor (MRMS) product (Zhang et al. 2011, 2014), which will be transitioned into operations at the National Centers for Environmental Prediction (NCEP) in Fall 2014. This paper describes the current and experimental SPoRT-LIS configurations, and documents some of the limitations still remaining through the advent of MRMS precipitation analyses in the SPoRT-LIS land surface model simulations. Section 2 gives background information on the NASA LIS and describes the real-time SPoRT-LIS configurations being compared. Section 3 presents recent work done to develop a training module on situational awareness applications of real-time SPoRT-LIS output. Comparisons between output from the two SPoRT-LIS runs are shown in Section 4, including documentation of issues encountered in using the MRMS precipitation dataset. A summary and future work is given in Section 5, followed by acknowledgements and references.
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
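The "over three times" wall-clock speedup on four processors can be put in perspective with Amdahl's law, a standard estimate not specific to ARC3D: reaching a speedup above 3x on 4 CPUs requires roughly 90-95% of the run to execute in parallel.

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Amdahl's-law wall-clock speedup for a code whose parallelizable
    fraction p runs perfectly on n CPUs: S = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

print(round(amdahl_speedup(0.90, 4), 2))  # 3.08
print(round(amdahl_speedup(0.95, 4), 2))  # 3.48
```

This is why the achievable factor "depends on the nature of the program code": any serial fraction caps the multitasking gain well below the 4x processor count.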
Time warp operating system version 2.7 internals manual
NASA Technical Reports Server (NTRS)
1992-01-01
The Time Warp Operating System (TWOS) is an implementation of the Time Warp synchronization method proposed by David Jefferson. In addition, it serves as an actual platform for running discrete event simulations. The code comprising TWOS can be divided into several different sections. TWOS typically relies on an existing operating system to furnish some very basic services. This existing operating system is referred to as the Base OS. The existing operating system varies depending on the hardware TWOS is running on. It is Unix on the Sun workstations, Chrysalis or Mach on the Butterfly, and Mercury on the Mark 3 Hypercube. The base OS could be an entirely new operating system, written to meet the special needs of TWOS, but, to this point, existing systems have been used instead. The base OS's used for TWOS on various platforms are not discussed in detail in this manual, as they are well covered in their own manuals. Appendix G discusses the interface between one such OS, Mach, and TWOS.
ChronQC: a quality control monitoring system for clinical next generation sequencing.
Tawari, Nilesh R; Seow, Justine Jia Wen; Perumal, Dharuman; Ow, Jack L; Ang, Shimin; Devasia, Arun George; Ng, Pauline C
2018-05-15
ChronQC is a quality control (QC) tracking system for clinical implementation of next-generation sequencing (NGS). ChronQC generates time series plots for various QC metrics to allow comparison of current runs to historical runs. ChronQC has multiple features for tracking QC data including Westgard rules for clinical validity, laboratory-defined thresholds and historical observations within a specified time period. Users can record their notes and corrective actions directly onto the plots for long-term recordkeeping. ChronQC facilitates regular monitoring of clinical NGS to enable adherence to high quality clinical standards. ChronQC is freely available on GitHub (https://github.com/nilesh-tawari/ChronQC), Docker (https://hub.docker.com/r/nileshtawari/chronqc/) and the Python Package Index. ChronQC is implemented in Python and runs on all common operating systems (Windows, Linux and Mac OS X). tawari.nilesh@gmail.com or pauline.c.ng@gmail.com. Supplementary data are available at Bioinformatics online.
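One of the Westgard rules ChronQC supports, the 1-3s rule, flags any run whose QC metric falls more than three standard deviations from the historical mean. A minimal sketch of the idea (hypothetical helper and data, not ChronQC's actual API):

```python
from statistics import mean, stdev

def violates_1_3s(historical, current):
    """Westgard 1-3s rule: flag the current run if its QC metric falls
    more than three standard deviations from the historical mean.
    Illustrative only -- not ChronQC's actual API."""
    m, s = mean(historical), stdev(historical)
    return abs(current - m) > 3 * s

history = [30.1, 29.8, 30.4, 30.0, 29.9, 30.2]  # e.g. mean coverage per run
print(violates_1_3s(history, 30.3))  # within limits -> False
print(violates_1_3s(history, 45.0))  # gross outlier -> True
```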
Mo, Shiwei; Chow, Daniel H K
2018-05-19
Motor control, related to running performance and running-related injuries, is affected by the progression of fatigue during a prolonged run. Distance runners are usually recommended to train at or slightly above anaerobic threshold (AT) speed to improve performance. However, running at AT speed may result in accelerated fatigue. It is not clear how one adapts running gait pattern during a prolonged run at AT speed, or whether there are differences between runners with different training experience. To compare characteristics of stride-to-stride variability and complexity during a prolonged run at AT speed between novice runners (NR) and experienced runners (ER), both NR (n = 17) and ER (n = 17) performed a treadmill run for 31 min at their AT speed. Stride interval dynamics were obtained throughout the run, with the middle 30 min equally divided into six time intervals (denoted as T1, T2, T3, T4, T5 and T6). The mean, coefficient of variation (CV) and scaling exponent alpha of stride intervals were calculated for each interval for each group. This study revealed that the mean stride interval increased significantly with running time in a non-linear trend (p<0.001). The stride interval variability (CV) remained relatively constant for NR (p = 0.22) and changed nonlinearly for ER (p = 0.023) throughout the run. Alpha was significantly different between groups at T2, T5 and T6, and changed nonlinearly with running time for both groups with slight differences. These findings provide insights into how the motor control system adapts to the progression of fatigue, and evidence that long-term training enhances motor control. Although both ER and NR could regulate gait complexity to maintain AT speed throughout the prolonged run, ER also regulated stride interval variability to achieve this goal. Copyright © 2018. Published by Elsevier B.V.
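The per-interval CV is simply the stride-interval standard deviation normalized by the mean (the scaling exponent alpha is conventionally obtained from detrended fluctuation analysis, omitted here). A sketch of the CV computation with hypothetical data:

```python
from statistics import mean, stdev

def stride_cv(stride_intervals):
    """Coefficient of variation (%) of stride intervals -- one of the
    gait-variability metrics computed per time interval in the study."""
    return 100.0 * stdev(stride_intervals) / mean(stride_intervals)

# Hypothetical stride intervals (s) from one 5-min interval of the run:
intervals = [0.72, 0.71, 0.73, 0.72, 0.74, 0.71, 0.72]
print(f"mean = {mean(intervals):.3f} s, CV = {stride_cv(intervals):.2f}%")
```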
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.
2012-04-01
This work is driven by the needs of next-generation short-term optimization methodology for hydropower production. Stochastic optimization is about to be introduced, i.e., optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e., water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment maybe days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e., to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of runoff forecasts as input; hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on a Box-Cox (power) transformation and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample runoff ensembles from this model, which inherits the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, and simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study, such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using the CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution.
It is tested against a deterministic runoff forecast, a climatology forecast and a persistence forecast, and is found to be the better probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
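The CRPS used to evaluate the marginal predictive distributions can be estimated directly from an ensemble as E|X − y| − ½·E|X − X′|, lower being better. A minimal sketch with hypothetical values (not the authors' implementation):

```python
def crps_ensemble(members, obs):
    """Sample-based CRPS estimate for an ensemble forecast:
    E|X - y| - 0.5 * E|X - X'|  (lower is better)."""
    n = len(members)
    term1 = sum(abs(x - obs) for x in members) / n
    term2 = sum(abs(x - y) for x in members for y in members) / (n * n)
    return term1 - 0.5 * term2

# A sharp, well-centred ensemble scores lower than a biased one:
obs = 12.0  # observed runoff, hypothetical units
print(crps_ensemble([11.5, 12.0, 12.5, 11.8, 12.2], obs))
print(crps_ensemble([15.0, 16.0, 15.5, 14.8, 15.2], obs))
```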
Couvillon, Margaret J; Riddell Pearce, Fiona C; Harris-Jones, Elisabeth L; Kuepfer, Amanda M; Mackenzie-Smith, Samantha J; Rozario, Laura A; Schürch, Roger; Ratnieks, Francis L W
2012-05-15
Noise is universal in information transfer. In animal communication, this presents a challenge not only for intended signal receivers, but also to biologists studying the system. In honey bees, a forager communicates to nestmates the location of an important resource via the waggle dance. This vibrational signal is composed of repeating units (waggle runs) that are then averaged by nestmates to derive a single vector. Manual dance decoding is a powerful tool for studying bee foraging ecology, although the process is time-consuming: a forager may repeat the waggle run from 1 to more than 100 times within a dance. It is impractical to decode all of these to obtain the vector; however, intra-dance waggle runs vary, so it is important to decode enough to obtain a good average. Here we examine the variation among waggle runs made by foraging bees to devise a method of dance decoding. The first and last waggle runs within a dance are significantly more variable than the middle runs. There was no trend in variation for the middle waggle runs. We recommend that any four consecutive waggle runs, not including the first and last runs, may be decoded, and we show that this methodology is suitable by demonstrating the goodness-of-fit between the decoded vectors from our subsamples and the vectors from the entire dances.
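The averaging step the recommendation relies on, collapsing several decoded waggle runs into a single vector, can be sketched by summing Cartesian components (hypothetical angles and durations; not the authors' decoding protocol):

```python
import math

def average_vector(runs):
    """Average decoded waggle runs -- (angle_rad, duration_s) pairs --
    into one recruitment vector by summing Cartesian components
    (run duration standing in for distance).  An illustrative sketch of
    the averaging step, not the authors' decoding protocol."""
    xs = sum(d * math.cos(a) for a, d in runs)
    ys = sum(d * math.sin(a) for a, d in runs)
    n = len(runs)
    return math.atan2(ys, xs), math.hypot(xs, ys) / n

# Four consecutive middle runs from one dance (hypothetical values):
runs = [(0.52, 1.90), (0.55, 2.00), (0.50, 2.10), (0.54, 1.95)]
angle, duration = average_vector(runs)
```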
40 CFR 86.535-90 - Dynamometer procedure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... run consists of two tests, a “cold” start test and a “hot” start test following the “cold” start by 10... Administrator. (d) Practice runs over the prescribed driving schedule may be performed at test points, provided... the proper speed-time relationship, or to permit sampling system adjustments. (e) The drive wheel...
Evaluating Real-Time Platforms for Aircraft Prognostic Health Management Using Hardware-In-The-Loop
2008-08-01
obtained when using HIL and a simulated load. Initially, noticeable differences are seen when comparing the results from each real-time operating system. However... same model in native Simulink. These results show that each real-time operating system can be configured to accurately run transient Simulink
Hawkins, Brian T; Sellgren, Katelyn L; Klem, Ethan J D; Piascik, Jeffrey R; Stoner, Brian R
2017-11-01
Decentralized, energy-efficient wastewater treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water- and energy-scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field.
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time, although the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms. The expected run time for Go-PERMDIST is longer than for FASTOVERLAP for periodic systems. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms. However, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
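For a fixed atom correspondence, the optimal rotation (and hence the minimum RMSD) can be found deterministically with the standard Kabsch/SVD construction; the hard part addressed by Go-PERMDIST and FASTOVERLAP is the additional search over permutations of identical atoms (and, for supercells, translations). A sketch of the fixed-correspondence building block only:

```python
import numpy as np

def kabsch_rmsd(A, B):
    """Minimum RMSD between conformations A and B (n x 3 arrays) over
    rigid rotations and translations, for a FIXED atom correspondence
    (Kabsch/SVD construction).  Go-PERMDIST and FASTOVERLAP additionally
    optimize over permutations of identical atoms, which this skips."""
    A = A - A.mean(axis=0)                    # remove translations
    B = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B.T @ A)         # M = B^T A = U S V^T
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation
    return float(np.sqrt(np.mean(np.sum((A - B @ R.T) ** 2, axis=1))))

# A structure, and the same structure rotated about z and translated:
A = np.array([[1.0, 0, 0], [0, 2.0, 0], [0, 0, 3.0], [1.0, 1.0, 1.0]])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
print(kabsch_rmsd(A, A @ Rz.T + 5.0))  # ~0: alignment recovers the rotation
```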
Roach, Grahm C.; Edke, Mangesh
2012-01-01
Biomechanical data provide fundamental information about changes in musculoskeletal function during development, adaptation, and disease. To facilitate the study of mouse locomotor biomechanics, we modified a standard mouse running wheel to include a force-sensitive rung capable of measuring the normal and tangential forces applied by individual paws. Force data were collected throughout the night using an automated threshold trigger algorithm that synchronized force data with wheel-angle data and a high-speed infrared video file. During the first night of wheel running, mice reached consistent running speeds within the first 40 force events, indicating a rapid habituation to wheel running, given that mice generated >2,000 force-event files/night. Average running speeds and peak normal and tangential forces were consistent throughout the first four nights of running, indicating that one night of running is sufficient to characterize the locomotor biomechanics of healthy mice. Twelve weeks of wheel running significantly increased spontaneous wheel-running speeds (16 vs. 37 m/min), lowered duty factors (ratio of foot-ground contact time to stride time; 0.71 vs. 0.58), and raised hindlimb peak normal forces (93 vs. 115% body wt) compared with inexperienced mice. Peak normal hindlimb-force magnitudes were the primary force component, which were nearly tenfold greater than peak tangential forces. Peak normal hindlimb forces exceed the vertical forces generated during overground running (50-60% body wt), suggesting that wheel running shifts weight support toward the hindlimbs. This force-instrumented running-wheel system provides a comprehensive, noninvasive screening method for monitoring gait biomechanics in mice during spontaneous locomotion. PMID:22723628
Solving Equations of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Lim, Christopher
2007-01-01
Darts++ is a computer program for solving the equations of motion of a multibody system or of a multibody model of a dynamic system. It is intended especially for use in dynamical simulations performed in designing, analyzing, and developing control software for complex mechanical systems. Darts++ is based on the Spatial-Operator-Algebra formulation for multibody dynamics. This software reads a description of a multibody system from a model data file, then constructs and implements an efficient algorithm that solves the dynamical equations of the system. The efficiency and, hence, the computational speed is sufficient to make Darts++ suitable for use in real-time closed-loop simulations. Darts++ features an object-oriented software architecture that enables reconfiguration of system topology at run time; in contrast, in related prior software, system topology is fixed during initialization. Darts++ provides an interface to scripting languages, including Tcl and Python, that enables the user to configure and interact with simulation objects at run time.
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
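The benchmark pattern, timing repeated reconstruction MVMs (command vector = control matrix × slope vector) and examining the spread of execution times, can be sketched as follows (matrix sizes are illustrative, not those of the benchmarked system):

```python
import time
import numpy as np

# Illustrative AO reconstruction sizes -- NOT the paper's configuration.
n_slopes, n_actuators = 4096, 2048
cm = np.random.rand(n_actuators, n_slopes).astype(np.float32)   # control matrix
slopes = np.random.rand(n_slopes).astype(np.float32)            # WFS slopes

times = []
for _ in range(100):
    t0 = time.perf_counter()
    commands = cm @ slopes            # the wavefront-reconstruction MVM
    times.append(time.perf_counter() - t0)

times = np.array(times)
# Mean latency sets the loop frequency; the spread is the jitter the
# paper investigates when assessing real-time suitability.
print(f"mean {times.mean()*1e3:.3f} ms, jitter (std) {times.std()*1e3:.4f} ms, "
      f"max loop rate {1.0/times.min():.0f} Hz")
```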
Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application
NASA Astrophysics Data System (ADS)
Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.
2011-12-01
EPICS is used worldwide mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well fit for the design and development of automation systems, which are typically VME- or PLC-based systems, and for soft real-time systems, it may present several drawbacks when used to develop hard real-time systems and applications, especially when general purpose operating systems such as plain Linux are chosen. This is particularly true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems that are able to run on general purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different tokamaks. In this paper, we first summarize differences and similarities between EPICS IOC and MARTe. We then report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux, with an IO control algorithm implemented in an EPICS IOC and in MARTe.
Boonyasit, Yuwadee; Laiwattanapaisal, Wanida
2015-01-01
A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.
Schoenrock, Danielle L; Hartkopf, Katherine; Boeckelman, Carrie
2016-12-01
The development and implementation of a centralized, pharmacist-run population health program were pursued within a health system to increase patient exposure to comprehensive medication reviews (CMRs) and improve visit processes. Program implementation included choosing appropriate pilot pharmacy locations, developing a feasible staffing model, standardizing the workflow, and creating a patient referral process. The impact on patient exposure, specific interventions, and the sustainability of the program were evaluated over a seven-month period. A total of 96 CMRs were scheduled during the data collection period. Attendance at scheduled CMRs was 54% (52 visits); there were 25 cancellations (26%) and 19 no-shows (20%). Since program implementation, there has been more than a twofold increase (2.08) in the number of CMRs completed within the health system. On average, all aspects of each patient visit took 1.78 hours to complete. Pharmacists spent 28% of scheduled time on CMR tasks and 72% of time on telephone calls and technical tasks to maintain appointments. A pharmacist-run CMR program helped to elevate the role of the community pharmacist in a health system and to improve patient exposure to CMRs. Sustaining a centralized CMR program requires support from other members of the health-system team so that pharmacists can spend more time providing patient care and less time on the technical tasks involved. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on the design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models small enough for control analysis and design.
Quantifying variability in delta experiments
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. R.; McElroy, B. J.
2017-12-01
Large populations of people and wildlife make their homes on river deltas, therefore it is important to be able to make useful and accurate predictions of how these landforms will change over time. However, making predictions can be a challenge due to inherent variability of the natural system. Furthermore, when we extrapolate results from the laboratory to the field setting, we bring with it random and systematic errors of the experiment. We seek to understand both the intrinsic and experimental variability of river delta systems to help better inform predictions of how these landforms will evolve. We run exact replicates of experiments with steady sediment and water discharge and record delta evolution with overhead time lapse imaging. We measure aspects of topset progradation and channel dynamics and compare these metrics of delta morphology between the 6 replicated experimental runs. We also use data from all experimental runs collectively to build a large dataset to extract statistics of the system properties. We find that although natural variability exists, the processes in the experiments must have outcomes that no longer depend on their initial conditions after some time. Applying these results to the field scale will aid in our ability to make forecasts of how these landforms will progress.
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen
1992-01-01
A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.
2006-06-01
levels of automation applied as per Figure 13. ... models generated for this thesis were set to run for 60 minutes. To run the simulation for the set time, the analyst provides a random number seed to... (1984). The IMPRINT workload value of 60 has been used by a consensus of workload modeling SMEs to represent the 'high' threshold, while the
Damasceno, M V; Pasqua, L A; Lima-Silva, A E; Bertuzzi, R
2015-11-01
This study aimed to verify the association between the contribution of energy systems during an incremental exercise test (IET), pacing, and performance during a 10-km running time trial. Thirteen male recreational runners completed an incremental exercise test on a treadmill to determine the respiratory compensation point (RCP), maximal oxygen uptake (V˙O2max), peak treadmill speed (PTS), and energy systems contribution; and a 10-km running time trial (T10-km) to determine endurance performance. The fractions of the aerobic (WAER) and glycolytic (WGLYCOL) contributions were calculated for each stage based on the oxygen uptake and the oxygen energy equivalents derived from blood lactate accumulation, respectively. Total metabolic demand (WTOTAL) was the sum of these two energy systems. Endurance performance during the T10-km was moderately correlated with RCP, V˙O2max and PTS (P<0.05), and moderate-to-highly correlated with WAER, WGLYCOL, and WTOTAL (P<0.05). In addition, WAER, WGLYCOL, and WTOTAL were also significantly correlated with running speed in the middle (P<0.01) and final (P<0.01) sections of the T10-km. These findings suggest that the assessment of energy contribution during IET is potentially useful as an alternative variable in the evaluation of endurance runners, especially because of its relationship with specific parts of a long-distance race.
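The glycolytic contribution per stage is conventionally estimated from net blood lactate accumulation via an oxygen equivalent of about 3 ml O2 per kg body mass per mM lactate. A sketch of that standard calculation (the exact constants used in the study are an assumption here):

```python
def glycolytic_contribution_kj(delta_lactate_mM, body_mass_kg,
                               o2_eq_ml_per_kg_mM=3.0, kj_per_l_o2=20.9):
    """Glycolytic energy (kJ) estimated from net blood lactate
    accumulation, using the commonly cited O2 equivalent of
    ~3 ml O2/kg per mM lactate -- a sketch of the standard approach,
    not necessarily the exact constants used in the study."""
    o2_litres = delta_lactate_mM * o2_eq_ml_per_kg_mM * body_mass_kg / 1000.0
    return o2_litres * kj_per_l_o2

# A 70-kg runner accumulating 6 mM lactate over a stage:
print(f"{glycolytic_contribution_kj(6.0, 70.0):.1f} kJ")
```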
Singh, Nitin Kumar; Bhatia, Akansha; Kazmi, Absar Ahmad
2017-11-01
This study investigated the effect of various intermittent aeration (IA) cycles on organics and nutrient removal, and on microbial communities, in an integrated fixed-film activated sludge (IFAS) reactor treating municipal wastewater. Average effluent biological oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids, total nitrogen (TN) and total phosphorus (TP) values were noted as 20, 50, 30, 12 and 1.5 mg/L, respectively, in continuous aeration mode. A total of four operational conditions (run 1, continuous aeration; run 2, 150/30 min aeration on/off time; run 3, 120/60 min aeration on/off time; and run 4, 90/60 min aeration on/off time) were investigated in the IFAS reactor assessment. Among all the examined IA cycles, IA phase 2 gave the maximum COD and BOD removals, with values recorded as 97% and 93.8%, respectively. With respect to nutrient removal (TN and TP), IA phase 1 was found to be optimum. Pathogen removal efficiency of the present system was recorded as 90-95% during the three phases. With regard to settling characteristics, the pilot showed poor settling during IA schedules, which was also evidenced by high sludge volume index values. Overall, IA could be used as a feasible way to improve the overall performance of the IFAS system.
Safety management for polluted confined space with IT system: a running case.
Hwang, Jing-Jang; Wu, Chien-Hsing; Zhuang, Zheng-Yun; Hsu, Yi-Chang
2015-01-01
This study traced a deployed real IT system intended to enhance occupational safety in a polluted confined space. By incorporating wireless technology, it automatically monitors the status of workers on the site, and when anomalous events are detected, managers are notified promptly. The system, with a redefined standard operations process, is running well at one of Formosa Petrochemical Corporation's refineries. Evidence shows that after deployment, the system does enhance the safety level by monitoring the workers in real time and by managing and controlling the anomalies well. Therefore, such a technical architecture can be applied to similar scenarios for safety enhancement purposes.
A faster technique for rendering meshes in multiple display systems
NASA Astrophysics Data System (ADS)
Hand, Randall E.; Moorhead, Robert J., II
2003-05-01
Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves both the visual quality of the system through use of graphics hardware acceleration, and the framerate and running time through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.
MS-BWME: A Wireless Real-Time Monitoring System for Brine Well Mining Equipment
Xiao, Xinqing; Zhu, Tianyu; Qi, Lin; Moga, Liliana Mihaela; Zhang, Xiaoshuan
2014-01-01
This paper describes a wireless real-time monitoring system (MS-BWME) to monitor the running state of pumps equipment in brine well mining and prevent potential failures that may produce unexpected interruptions with severe consequences. MS-BWME consists of two units: the ZigBee Wireless Sensors Network (WSN) unit and the real-time remote monitoring unit. MS-BWME was implemented and tested in sampled brine wells mining in Qinghai Province and four kinds of indicators were selected to evaluate the performance of the MS-BWME, i.e., sensor calibration, the system's real-time data reception, Received Signal Strength Indicator (RSSI) and sensor node lifetime. The results show that MS-BWME can accurately judge the running state of the pump equipment by acquiring and transmitting the real-time voltage and electric current data of the equipment from the spot and provide real-time decision support aid to help workers overhaul the equipment in a timely manner and resolve failures that might produce unexpected production down-time. The MS-BWME can also be extended to a wide range of equipment monitoring applications. PMID:25340455
New operator assistance features in the CMS Run Control System
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
2017-10-01
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to the complexity of the controlled objects, the selection of control strategies and algorithms is an important task in vehicle control system design. Moreover, the control of automobile active suspensions has become an important research problem due to the constraints and parameter uncertainty of the mathematical models. In this study, after establishing the non-stationary road surface excitation model, the active suspension control problem under non-stationary running conditions was studied using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter-car model with parameter uncertainty was derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and then used to design the active suspension control system for the quarter-car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy maintains high system stability under non-stationary running and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and required control forces within the given constraints, as well as under non-stationary running conditions.
Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G
1999-01-01
An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. the late stage of Amyotrophic Lateral Sclerosis) and has to operate in real-time. This paper describes the selection of the appropriate components to construct such a BCI and also focuses on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time kernel expansion to obtain reasonable real-time operation on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least squares (RLS) algorithm that describes the current state of the EEG in real-time. First results with the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
Run-time implementation issues for real-time embedded Ada
NASA Technical Reports Server (NTRS)
Maule, Ruth A.
1986-01-01
A motivating factor in the development of Ada as the Department of Defense standard language was the high cost of embedded system software development. It was with embedded system requirements in mind that many of the features of the language were incorporated. Yet it is the designers of embedded systems who seem to comprise the majority of the Ada community dissatisfied with the language. There are a variety of reasons for this dissatisfaction, but many seem to be related in some way to the Ada run-time support system. Some of the areas in which the inconsistencies were found to have the greatest impact on performance from the standpoint of real-time systems are presented. In particular, a large part of the duties of the tasking supervisor is subject to the design decisions of the implementer. These include scheduling, rendezvous, delay processing, and task activation and termination. Some of the more general issues presented include time and space efficiency, generic expansions, memory management, pragmas, and tracing features. As validated compilers become available for bare computer targets, it is important for a designer to be aware that, at least for many real-time issues, all validated Ada compilers are not created equal.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of both the hardware and software conception and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real-time. The developed functions provide direct visual feedback on the quality of the captured images, which helps in taking appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can be run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising perspectives for further work.
Modeling a maintenance simulation of the geosynchronous platform
NASA Technical Reports Server (NTRS)
Kleiner, A. F., Jr.
1980-01-01
A modeling technique used to conduct a simulation study comparing various maintenance routines for a space platform is discussed. A system model is described and illustrated, the basic concepts of a simulation pass are detailed, and sections on failures and maintenance are included. The operation of the system across time is best modeled by a discrete event approach with two basic events: failure and maintenance of the system. Each overall simulation run consists of introducing a particular model of the physical system, together with a maintenance policy, demand function, and mission lifetime. The system is then run through many passes, each pass corresponding to one mission, and the model is re-initialized before each pass. Statistics are compiled at the end of each pass, and after the last pass a report is printed. Items of interest typically include the time to first maintenance, the total number of maintenance trips for each pass, the average capability of the system, etc.
Impact of MODIS High-Resolution Sea-Surface Temperatures on WRF Forecasts at NWS Miami, FL
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; LaCasse, Katherine M.; Dembek, Scott R.; Santos, Pablo; Lapenta, William M.
2007-01-01
Over the past few years, studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) composite sea-surface temperature (SST) products in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. The recent paper by LaCasse et al. (2007, Monthly Weather Review) highlights lower atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The scientific hypothesis being tested is: more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running the WRF system in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. The EMS is a standalone modeling system capable of downloading the necessary daily datasets, and initializing, running and displaying WRF forecasts in the NWS Advanced Weather Interactive Processing System (AWIPS) with little intervention required by forecasters.
Twenty-seven-hour forecasts are run daily with start times of 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and the far western portions of the Bahamas, the Florida Keys, the Straits of Florida, and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS, invoking the diabatic "hot-start" capability. In this WRF model "hot-start", the LAPS-analyzed cloud and precipitation features are converted into model microphysics fields with enhanced vertical velocity profiles, effectively reducing the model spin-up time required to predict precipitation systems. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12-degree resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA in every respect except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST composites into the SPoRT WRF runs is staggered such that the 0400 UTC composite initializes the 0900 UTC WRF, the 0700 UTC composite initializes the 1500 UTC WRF, the 1600 UTC composite initializes the 2100 UTC WRF, and the 1900 UTC composite initializes the 0300 UTC WRF. A comparison of the SPoRT and Miami forecasts is underway in 2007, and includes quantitative verification of near-surface temperature, dewpoint, and wind forecasts at surface observation locations.
In addition, particular days of interest are being analyzed to determine the impact of the MODIS SST data on the development and evolution of predicted sea/land-breeze circulations, clouds, and precipitation. This paper will present verification results comparing the NWS MIA forecasts with the SPoRT experimental WRF forecasts, and highlight any substantial differences noted in the predicted mesoscale phenomena.
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
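The decision-table idea described in this abstract can be illustrated with a minimal sketch: each row pairs a set of condition outcomes with an action, and the engine returns the action of the first matching row. The fields and rules below are hypothetical placeholders for illustration only, not the actual ETKAS allocation rules.

```python
def evaluate(table, candidate):
    """Return the action of the first row whose conditions all match.

    A condition value of None means "don't care" for that column.
    """
    for conditions, action in table:
        if all(value is None or candidate.get(field) == value
               for field, value in conditions.items()):
            return action
    return "no-match"

# Hypothetical rows: (conditions, action). A real allocation table would
# cover blood group, waiting time, HLA mismatches, urgency, and more.
table = [
    ({"blood_group_match": True,  "urgent": True},  "offer"),
    ({"blood_group_match": True,  "urgent": None},  "queue"),
    ({"blood_group_match": False, "urgent": None},  "skip"),
]

print(evaluate(table, {"blood_group_match": True, "urgent": True}))   # prints: offer
print(evaluate(table, {"blood_group_match": True, "urgent": False}))  # prints: queue
```

Representing the rules this way is what makes the completeness and consistency checks mentioned in the abstract mechanical: completeness amounts to verifying that every combination of condition outcomes is covered by some row.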
Searches for all types of binary mergers in the first Advanced LIGO observing run
NASA Astrophysics Data System (ADS)
Read, Jocelyn
2017-01-01
The first observing run of the Advanced LIGO detectors covered September 12, 2015 to January 19, 2016. In that time, two definitive observations of merging binary black hole systems were made. In particular, the second observation, GW151226, relied on matched-filter searches targeting merging binaries. These searches were also capable of detecting binary mergers from binary neutron stars and from black-hole/neutron-star binaries. In this talk, I will give an overview of LIGO compact binary coalescence searches, in particular focusing on systems that contain neutron stars. I will discuss the sensitive volumes of the first observing run, the astrophysical implications of detections and non-detections, and prospects for future observations.
Run Environment and Data Management for Earth System Models
NASA Astrophysics Data System (ADS)
Widmann, H.; Lautenschlager, M.; Fast, I.; Legutke, S.
2009-04-01
The Integrating Model and Data Infrastructure (IMDI) developed and maintained by the Model and Data Group (M&D) comprises the Standard Compile Environment (SCE) and the Standard Run Environment (SRE). The IMDI software has a modular design, which allows one to combine and couple a suite of model components as well as to execute the tasks independently and on various platforms. Furthermore, the modular structure enables extension to new model combinations and new platforms. The SRE presented here enables the configuration and execution of Earth system model experiments, from model integration up to storage and visualization of data. We focus on recently implemented tasks such as synchronous database filling, graphical monitoring and automatic generation of metadata in XML form during run time. We also address the capability to run experiments in heterogeneous IT environments with different computing systems for model integration, data processing and storage. These features are demonstrated for model configurations and platforms used in current or upcoming projects, e.g. MILLENNIUM or IPCC AR5.
Evaluation of bus transit reliability in the District of Columbia.
DOT National Transportation Integrated Search
2013-11-01
Several performance metrics can be used to assess the reliability of a transit system. These include on-time arrivals, travel-time adherence, run-time adherence, and customer satisfaction, among others. On-time arrival at bus stops is one of the pe...
Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.
Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C
2017-03-01
Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017. To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity, and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time (INV) models. Five-thousand-meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000 - D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and coefficient of variation (%). Five-thousand-meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study show the importance of aerobic and anaerobic energy system contributions in predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
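The prediction formulas quoted in the abstract, t = (5,000 - D')/CS and s = D'/t + CS, can be sketched directly. The CS and D' values below are illustrative, not values from the study.

```python
def predict_time(cs, d_prime, distance=5000.0):
    """Predicted race time in seconds: t = (distance - D') / CS."""
    return (distance - d_prime) / cs

def predict_speed(cs, d_prime, distance=5000.0):
    """Predicted mean race speed in m/s: s = D'/t + CS."""
    t = predict_time(cs, d_prime, distance)
    return d_prime / t + cs

# Hypothetical athlete: CS = 4.0 m/s, D' = 180 m.
t = predict_time(4.0, 180.0)   # (5000 - 180) / 4.0 = 1205.0 s
s = predict_speed(4.0, 180.0)  # 180/1205 + 4.0, algebraically 5000/1205 m/s
print(round(t), round(s, 2))   # prints: 1205 4.15
```

Note that the speed formula is just the time formula rearranged (s = distance/t), which is why the two predictions always agree for a given CS and D'.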
File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.-S.
1987-01-01
A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
Predictability of process resource usage - A measurement-based study on UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1989-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
Predictability of process resource usage: A measurement-based study of UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1987-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
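The state-transition scheme described in these abstracts can be sketched in miniature: states are resource regions, and the next run's region is predicted from transition counts over the program's past executions. The region labels and run history below are illustrative stand-ins for the clusters obtained off-line, not measured data.

```python
from collections import Counter, defaultdict

def build_model(history):
    """Count region-to-region transitions over a program's past executions."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict(transitions, current):
    """Most frequent successor region of `current`, or None if unseen."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Hypothetical run history: labels stand in for clustered
# (CPU time, file I/O, memory) resource regions.
history = ["low-cpu", "low-cpu", "low-cpu", "high-io", "low-cpu", "low-cpu"]
model = build_model(history)
print(predict(model, "low-cpu"))  # prints: low-cpu (3 of 4 past transitions stay)
```

In the papers' setting, a prediction is issued at process creation from the region of the program's previous execution; the counts above play the role of the empirical transition probabilities.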
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of the performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high-water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
Study on real-time elevator brake failure predictive system
NASA Astrophysics Data System (ADS)
Guo, Jun; Fan, Jinwei
2013-10-01
This paper presents a real-time failure predictive system for the elevator brake. By inspecting the running state of the coil with a high-precision, long-range, non-contact laser triangulation measurement sensor, the displacement curve of the coil is gathered without interfering with the original system. By analyzing the displacement data using the diagnostic algorithm, hidden dangers in the brake system can be discovered in time, and the corresponding accidents can thus be avoided.
Loading forces in shallow water running in two levels of immersion.
Haupenthal, Alessandro; Ruschel, Caroline; Hubert, Marcel; de Brito Fontana, Heiliane; Roesler, Helio
2010-07-01
To analyse the vertical and anteroposterior components of the ground reaction force during shallow water running at 2 levels of immersion. Twenty-two healthy adults with no gait disorders, who were familiar with aquatic exercises. Subjects performed 6 trials of water running at a self-selected speed at chest and hip immersion. Force data were collected through an underwater force plate, and running speed was measured with a photocell timing light system. Analysis of covariance was used for data analysis. Vertical forces corresponded to 0.80 and 0.98 times the subject's body weight at the chest and hip level, respectively. Anteroposterior forces corresponded to 0.26 and 0.31 times the subject's body weight at the chest and hip level, respectively. As the water level decreased, the subjects ran faster. No significant differences were found for the force values between the immersion levels, probably due to variability in speed, which was self-selected. When considering load values in water running, professionals should take into account not only the immersion level but also the speed, as it can affect the force components, mainly the anteroposterior one. Quantitative data on this subject could help professionals to conduct safer aquatic rehabilitation and physical conditioning protocols.
Improving Reliability in a Stochastic Communication Network
1990-12-01
and GINO. In addition, the following computers were used: a Sun 386i workstation, a Digital Equipment Corporation (DEC) 11/785 miniframe, and a DEC...operating system. The DEC 11/785 miniframe used in the experiment was running Unix Version 4.3 (Berkeley Software Distribution). Maxflo was run on the DEC 11/785...the file was still called Modifyl.for). 4. The Maxflo program was started on the DEC 11/785 miniframe. 5. At this time the Convert.max file, created
AFTER: Batch jobs on the Apollo ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofstadler, P.
1987-07-01
This document describes AFTER, a system that allows users of an Apollo ring to submit batch jobs to run without leaving themselves logged in to the ring. Jobs may be submitted to run at a later time or on a different node. Results from the batch job are mailed to the user through a designated mail system. AFTER features an understandable user interface, good online help, and site customization. This manual serves primarily as a user's guide to AFTER, although administration and installation are covered for completeness.
NASA Technical Reports Server (NTRS)
Parra, Macarena; Jung, Jimmy; Almeida, Eduardo; Boone, Travis; Schonfeld, Julie; Tran, Luan
2016-01-01
The WetLab-2 system was developed by NASA Ames Research Center to offer new capabilities to researchers. The system can lyse cells and extract RNA (Ribonucleic Acid) on-orbit from different sample types ranging from microbial cultures to animal tissues. The purified RNA can then either be stabilized for return to Earth or be used to conduct on-orbit quantitative Reverse Transcriptase PCR (Polymerase Chain Reaction) (qRT-PCR) analysis without the need for sample return. The qRT-PCR results can be downlinked to the ground a few hours after the completion of the run. The validation flight of the WetLab-2 system launched on SpaceX-8 on April 8, 2016. On-orbit operations started on April 15 with system setup and were followed by three quantitative PCR runs using an E. coli genomic DNA template pre-loaded at three different concentrations. These runs were designed to discern whether quantitative PCR functions correctly in microgravity and whether the data are comparable to those from the ground control runs. The flight data showed no significant differences compared to the ground data, though there was more variability in the values, likely due to the numerous small bubbles observed. The capability of the system to process samples and purify RNA was then validated using frozen samples prepared on the ground. The flight data for both E. coli and mouse liver clearly show that RNA was successfully purified by the system. The E. coli qRT-PCR run demonstrated successful singleplex, duplex and triplex capability. The data showed high variability in the resulting Cts (Cycle Thresholds [for the PCR]), likely due to bubble formation and insufficient mixing during the procedure run. The mouse liver qRT-PCR run had successful singleplex and duplex reactions, and the variability was slightly better as the mixing operation was improved. The ability to purify and stabilize RNA and to conduct qRT-PCR on-orbit is an important step towards utilizing the ISS as a National Laboratory facility.
The ability to get on-orbit data will provide investigators with the opportunity to adjust experimental parameters in real time without the need for sample return and re-flight. The WetLab-2 Project is supported by the Research Integration Office in the ISS Program.
An Innovative Running Wheel-based Mechanism for Improved Rat Training Performance.
Chen, Chi-Chun; Yang, Chin-Lung; Chang, Ching-Ping
2016-09-19
This study presents an animal mobility system, equipped with a positioning running wheel (PRW), as a way to quantify the efficacy of exercise activity in reducing the severity of stroke effects in rats. This system provides more effective animal exercise training than commercially available systems such as treadmills and motorized running wheels (MRWs). In contrast to an MRW, which can only achieve speeds below 20 m/min, rats in this work are permitted to run at a stable speed of 30 m/min on a more spacious and high-density rubber running track supported by a 15 cm wide acrylic wheel with a diameter of 55 cm. Using a predefined adaptive acceleration curve, the system not only reduces operator error but also trains the rats to run persistently until a specified intensity is reached. As a way to evaluate exercise effectiveness, the real-time position of a rat is detected by four pairs of infrared sensors deployed on the running wheel. Once an adaptive acceleration curve is initiated using a microcontroller, the data obtained by the infrared sensors are automatically recorded and analyzed in a computer. For comparison purposes, 3 weeks of training were conducted on rats using a treadmill, an MRW and a PRW. After surgically inducing middle cerebral artery occlusion (MCAo), modified neurological severity scores (mNSS) and an inclined plane test were used to assess the neurological damage to the rats. The PRW was experimentally validated as the most effective among these animal mobility systems. Furthermore, an exercise effectiveness measure based on rat position analysis showed that there is a high negative correlation between effective exercise and infarct volume, and it can be employed to quantify rat training in any type of brain damage reduction experiment.
Improved infrared-sensing running wheel systems with an effective exercise activity indicator.
Chen, Chi-Chun; Chang, Ming-Wen; Chang, Ching-Ping; Chang, Wen-Ying; Chang, Shin-Chieh; Lin, Mao-Tsun; Yang, Chin-Lung
2015-01-01
This paper describes an infrared-sensing running wheel (ISRW) system for the quantitative measurement of effective exercise activity in rats. The ISRW system provides superior exercise training compared with commercially available traditional animal running platforms. Four infrared (IR) light-emitting diode/detector pairs embedded around the rim of the wheel detect the rat's real-time position; the acrylic wheel has a diameter of 55 cm and a thickness of 15 cm, that is, it is larger and thicker than traditional exercise wheels, and it is equipped with a rubber track. The acrylic wheel hangs virtually frictionless, and a DC motor with an axially mounted rubber wheel, which has a diameter of 10 cm, drives the acrylic wheel from the outer edge. The system can automatically train rats to run persistently. The proposed system can determine effective exercise activity (EEA), with the IR sensors (which are connected to a conventional PC) recording the rat exercise behavior. A prototype of the system was verified by a hospital research group performing ischemic stroke experiments on rats by considering middle cerebral artery occlusion. The experimental data demonstrated that the proposed system provides greater neuroprotection in an animal stroke model compared with a conventional treadmill and a motorized running wheel for a given exercise intensity. The quantitative exercise effectiveness indicator showed a 92% correlation between an increase in the EEA and a decrease in the infarct volume. This indicator can be used as a noninvasive and objective reference in clinical animal exercise experiments.
The application of connectionism to query planning/scheduling in intelligent user interfaces
NASA Technical Reports Server (NTRS)
Short, Nicholas, Jr.; Shastri, Lokendra
1990-01-01
In the mid-1990s, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Without addressing the overall problems of distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.
A Core Plug and Play Architecture for Reusable Flight Software Systems
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan
2006-01-01
The Flight Software Branch at Goddard Space Flight Center (GSFC) has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach, where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable, with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of, the run-time executive. This executive is the core of the component-based flight software commonality and reuse process adopted at Goddard.
The Web Based Monitoring Project at the CMS Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf
The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end, the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).
The web based monitoring project at the CMS experiment
NASA Astrophysics Data System (ADS)
Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf; Chakaberia, Irakli; Jo, Youngkwon; Maeshima, Kaori; Maruyama, Sho; Patrick, James; Rapsevicius, Valdas; Soha, Aron; Stankevicius, Mantas; Sulmanas, Balys; Toda, Sachiko; Wan, Zongru
2017-10-01
The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end, the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of each soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of the overall timing guarantees is ultimately determined by the properties of the individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.
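A hedged sketch of what coordinated multi-resource admission can look like: an application is admitted only if its utilization demand fits on every resource it touches, and the reservation is committed on all resources together. The capacity test and resource names are illustrative assumptions, not the IRS heuristic itself.

```python
# Illustrative multi-resource admission control (not the paper's algorithm).

def admit(reserved, request):
    """Admit a periodic app only if every resource stays within capacity.

    reserved: dict resource -> currently reserved utilization (0..1)
    request:  dict resource -> utilization the new app needs
    """
    for res, util in request.items():
        if reserved.get(res, 0.0) + util > 1.0:
            return False  # this resource would be overbooked
    # commit the reservation on all resources together
    for res, util in request.items():
        reserved[res] = reserved.get(res, 0.0) + util
    return True

reserved = {"cpu": 0.6, "disk": 0.3, "net": 0.5}
ok = admit(reserved, {"cpu": 0.3, "net": 0.2})   # fits everywhere
rejected = admit(reserved, {"cpu": 0.2})         # cpu already at 0.9
```

The point of the all-or-nothing commit is that a partial reservation on one resource is useless if a second resource the application also needs is saturated.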
A UNIX SVR4-OS 9 distributed data acquisition for high energy physics
NASA Astrophysics Data System (ADS)
Drouhin, F.; Schwaller, B.; Fontaine, J. C.; Charles, F.; Pallares, A.; Huss, D.
1998-08-01
The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The UNIX system manages a list of OS9 front-end systems with a synchronisation protocol running over a TCP/IP layer.
Dynamic analysis methods for detecting anomalies in asynchronously interacting systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Akshat; Solis, John Hector; Matschke, Benjamin
2014-01-01
Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build on results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.
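The first approach can be caricatured in a few lines: a small monitor randomly samples execution traces and checks a safety property derived from the design, triggering a fail-safe on any violation. The property (no release before acquire) and the trace format are hypothetical stand-ins for whatever the real design specification encodes.

```python
# Minimal run-time verification sketch: random trace sampling plus a
# safety-property check (property and trace format are hypothetical).
import random

def safe(trace):
    """Safety property: a resource is never released before it is acquired."""
    held = 0
    for event in trace:
        if event == "acquire":
            held += 1
        elif event == "release":
            held -= 1
            if held < 0:
                return False
    return True

def monitor(traces, samples=3, seed=0):
    """Query a random subset of traces; trigger fail-safe on any violation."""
    rng = random.Random(seed)
    for trace in rng.sample(traces, min(samples, len(traces))):
        if not safe(trace):
            return "fail-safe"
    return "ok"

good = [["acquire", "release"]] * 4
bad = good + [["release", "acquire"]]
```

Random sampling keeps the run-time overhead low at the cost of only probabilistic detection, which matches the paper's framing of a small verification component rather than full reconstruction.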
Factors That Influence Running Intensity in Interchange Players in Professional Rugby League.
Delaney, Jace A; Thornton, Heidi R; Duthie, Grant M; Dascombe, Ben J
2016-11-01
Rugby league coaches adopt replacement strategies for their interchange players to maximize running intensity; however, it is important to understand the factors that may influence match performance. The aim was to assess the independent factors affecting the running intensity sustained by interchange players during professional rugby league matches. Global positioning system (GPS) data were collected from all interchanged players (starters and nonstarters) in a professional rugby league squad across 24 matches of a National Rugby League season. A multilevel mixed-model approach was employed to establish the effect of various technical (attacking and defensive involvements), temporal (bout duration, time in possession, etc.), and situational (season phase, recovery cycle, etc.) factors on the relative distance covered and average metabolic power (Pmet) during competition. Significant effects were standardized using correlation coefficients, and the likelihood of the effect was described using magnitude-based inferences. Superior intermittent running ability resulted in very likely large increases in both relative distance and Pmet. As the length of a bout increased, both measures of running intensity exhibited a small decrease. There were at least likely small increases in running intensity for matches played after short recovery cycles and against strong opposition. During a bout, the number of collision-based involvements increased running intensity, whereas time in possession and ball time out of play decreased demands. These data demonstrate a complex interaction of individual- and match-based factors that require consideration when developing interchange strategies; the manipulation of training loads during shorter recovery periods and against stronger opponents may be beneficial.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes and generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to further reduce the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard
2017-11-06
Participation in half-marathons has been steeply increasing during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runners' load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running training, the runners' running experience and pace abilities can be used as estimates for load capacity. Since no evidence-based knowledge exists of how to plan appropriate half-marathon running schedules considering the level of running experience and running pace, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes.
ProjectRun21 will examine if particular subgroups of runners with certain running experiences and running paces seem to sustain more running-related injuries compared with other subgroups of runners. This will enable sport coaches, physiotherapists as well as the runners to evaluate their injury risk of taking up a 14-week running schedule for half-marathon.
Transient Turbine Engine Modeling with Hardware-in-the-Loop Power Extraction (PREPRINT)
2008-07-01
Furthermore, it must be compatible with a real-time operating system that is capable of running the simulation. For some models, especially those that use...problem of interfacing the engine/control model to a real-time operating system and associated lab hardware becomes a problem of interfacing these...model in real-time. This requires the use of a real-time operating system and a compatible I/O (input/output) board. Figure 1 illustrates the HIL
Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm
NASA Astrophysics Data System (ADS)
Hardi, S. M.; Tarigan, J. T.; Safrina, N.
2018-03-01
In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm to perform asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our tests show that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.
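The symmetric half of such a hybrid scheme can be illustrated with a classic single-square Playfair on letters. The paper applies a Double Playfair variant to image bytes; the simpler textbook cipher below is shown only to convey the digraph-substitution idea, and is not the paper's implementation.

```python
# Classic Playfair encryption sketch (illustrative; the paper uses a
# Double Playfair variant on image data).

def build_square(key):
    """Build the 5x5 key square (I and J merged), row-major."""
    seen = []
    for ch in (key.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ"):
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return seen  # 25 unique letters

def playfair_encrypt(plaintext, key):
    sq = build_square(key)
    pos = {ch: divmod(i, 5) for i, ch in enumerate(sq)}
    # split into digraphs, padding doubled letters and odd length with 'X'
    letters = [c for c in plaintext.upper().replace("J", "I") if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            b = "X"
            i += 1
        else:
            i += 2
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                       # same row: shift right
            out += [sq[ra * 5 + (ca + 1) % 5], sq[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:                     # same column: shift down
            out += [sq[((ra + 1) % 5) * 5 + ca], sq[((rb + 1) % 5) * 5 + cb]]
        else:                              # rectangle: swap columns
            out += [sq[ra * 5 + cb], sq[rb * 5 + ca]]
    return "".join(out)
```

The standard textbook vector with key MONARCHY encrypts INSTRUMENTS to GATLMZCLRQXA.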
Vulnerability Model. A Simulation System for Assessing Damage Resulting from Marine Spills
1975-06-01
used and the scenario simulated. The test runs were made on an IBM 360/65 computer. Running times were generally between 15 and 35 CPU seconds...effect further north. A petroleum tank-truck operation was located within 600 feet of the stock pond on which the crude oil had dammed up. At 5 A.M.
ERIC Educational Resources Information Center
Kirp, David L.
2014-01-01
For years, points out David L. Kirp, critics have lambasted public schools as fossilized bureaucracies run by paper-pushers and filled with time-serving teachers preoccupied with their job security, not the lives of their students. Yet, as this article describes, running an exemplary school system does not demand heroes or heroics, just hard and…
Recent Upgrades to NASA SPoRT Initialization Datasets for the Environmental Modeling System
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Lafontaine, Frank J.; Molthan, Andrew L.; Zavodsky, Bradley T.; Rozumalski, Robert A.
2012-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed several products for its NOAA/National Weather Service (NWS) partners that can initialize specific fields for local model runs within the NOAA/NWS Science and Training Resource Center Environmental Modeling System (EMS). The suite of SPoRT products for use in the EMS consists of a Sea Surface Temperature (SST) composite that includes a Lake Surface Temperature (LST) analysis over the Great Lakes, a Great Lakes sea-ice extent within the SST composite, a real-time Green Vegetation Fraction (GVF) composite, and NASA Land Information System (LIS) gridded output. This paper and companion poster describe each dataset and provide recent upgrades made to the SST, Great Lakes LST, GVF composites, and the real-time LIS runs.
Collecting Response Times using Amazon Mechanical Turk and Adobe Flash
Simcox, Travis; Fiez, Julie A.
2017-01-01
Crowdsourcing systems like Amazon's Mechanical Turk (AMT) allow data to be collected from a large sample of people in a short amount of time. This use has garnered considerable interest from behavioral scientists. So far, most experiments conducted on AMT have focused on survey-type instruments because of difficulties inherent in running many experimental paradigms over the Internet. This article investigated the viability of presenting stimuli and collecting response times using Adobe Flash to run ActionScript 3 code in conjunction with AMT. First, the timing properties of Adobe Flash were investigated using a phototransistor and two desktop computers running under several conditions mimicking those that may be present in research using AMT. This experiment revealed some strengths and weaknesses of the timing capabilities of this method. Next, a flanker task and a lexical decision task implemented in Adobe Flash were administered to participants recruited with AMT. The expected effects in these tasks were replicated. Power analyses were conducted to describe the number of participants needed to replicate these effects. A questionnaire was used to investigate previously undescribed computer use habits of 100 participants on AMT. We conclude that a Flash program in conjunction with AMT can be successfully used for running many experimental paradigms that rely on response times, although experimenters must understand the limitations of the method. PMID:23670340
ERIC Educational Resources Information Center
Soares, Andrey
2009-01-01
This research targeted the area of Ontology-Driven Information Systems, where ontology plays a central role both at development time and at run time of Information Systems (IS). In particular, the research focused on the process of building domain ontologies for IS modeling. The motivation behind the research was the fact that researchers have…
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks offer modelers new ways to develop model applications by providing a wide range of software components to handle common modeling tasks such as managing space and time, distributing computational tasks in a parallel processing environment, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, in the form of a new modeling markup language and a minimalist application programming interface that provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but also has the potential to integrate into a semantic web infrastructure, where intelligent parsers can extract information about the model configuration, such as input/output requirements, applicable space and time scales, and the underlying modeling processes.
The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly, so it can be incorporated into complex workflows to implement end-to-end applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an end-to-end application that delivers highly processed information to end users. The end-to-end application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Observing System of Systems, so the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
Tcl as a Software Environment for a TCS
NASA Astrophysics Data System (ADS)
Terrett, David L.
2002-12-01
This paper describes how the Tcl scripting language and C API have been used as the software environment for a telescope pointing kernel, so that new pointing algorithms and software architectures can be developed and tested without needing a real-time operating system or real-time software environment. This has enabled development to continue outside the framework of a specific telescope project while still building a system that is sufficiently complete to control real hardware, with minimum effort spent replacing the services that would normally be provided by a real-time software environment. Tcl is used as a scripting language for configuring the system at startup and then as the command interface for controlling the running system; the Tcl C language API is used to provide a system-independent interface to file and socket I/O and other operating system services. The pointing algorithms themselves are implemented as a set of C++ objects calling C library functions that implement the algorithms described in [2]. Although originally designed as a test and development environment, the system, running as a soft real-time process on Linux, has been used to test the SOAR mount control system and will be used as the pointing kernel of the SOAR telescope control system.
NSTX-U Advances in Real-Time C++11 on Linux
NASA Astrophysics Data System (ADS)
Erickson, Keith G.
2015-08-01
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing a single periodic deadline is a failure) of 200 microseconds.
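The hard-deadline policy itself is simple to state as a check, shown below in a short sketch (Python stands in for the C++11 implementation purely for illustration; the 200-microsecond period comes from the abstract, while the timing samples are invented):

```python
# Toy illustration of the hard real-time policy: every control-loop
# iteration must complete within its period; a single miss is a failure.

def first_missed_deadline(iteration_times_us, period_us=200):
    """Return the index of the first iteration exceeding the period,
    or None if every iteration met its deadline."""
    for i, t in enumerate(iteration_times_us):
        if t > period_us:
            return i
    return None
```

In the real system this check is implicit in the scheduler and the 5 kHz loop structure rather than computed after the fact, but the pass/fail criterion is the same.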
Sellgren, Katelyn L.; Klem, Ethan J. D.; Piascik, Jeffrey R.; Stoner, Brian R.
2017-01-01
Decentralized, energy-efficient wastewater treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water- and energy-scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state, with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field. PMID:29242713
Certification Strategies using Run-Time Safety Assurance for Part 23 Autopilot Systems
NASA Technical Reports Server (NTRS)
Hook, Loyd R.; Clark, Matthew; Sizoo, David; Skoog, Mark A.; Brady, James
2016-01-01
Part 23 aircraft operation, and in particular general aviation, is relatively unsafe when compared to other common forms of vehicle travel. Currently, there exist technologies that could improve safety statistics for these aircraft; however, the high burden and cost of performing the requisite safety-critical certification processes for these systems limit their proliferation. For this reason, many entities, including the Federal Aviation Administration, NASA, and the US Air Force, are considering new certification options for technologies that will improve aircraft safety. Of particular interest are low-cost autopilot systems for general aviation aircraft, as these systems have the potential to positively and significantly affect safety statistics. This paper proposes new systems and techniques, leveraging run-time verification, for the assurance of general aviation autopilot systems, which would supplement the current certification process and provide a viable path for near-term low-cost implementation. In addition, a discussion of preliminary experimentation and of building the assurance case for a system based on these principles is provided.
HAL/S-FC compiler system functional specification
NASA Technical Reports Server (NTRS)
1974-01-01
Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package, restrictions, and dependencies of the HAL/S-FC system are also considered.
Unifications and Extensions of the Multiple Access Communications Problem,
1981-07-01
Control, Stability and Waiting Time in a Slotted ALOHA Random Access System," IEEE...queueing them, the control procedure must tolerate a larger average number of messages in the system if it is to limit the number of times that the system ...running faster than real time to provide some flow control for that class. The virtual clocks for every other class merely act as a "gate" which
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
PC graphics generation and management tool for real-time applications
NASA Technical Reports Server (NTRS)
Truong, Long V.
1992-01-01
A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.
Simulation of linear mechanical systems
NASA Technical Reports Server (NTRS)
Sirlin, S. W.
1993-01-01
A dynamics and controls analyst is typically presented with a structural dynamics model and must perform various input/output tests and design control laws. The required time/frequency simulations need to be done many times as models change and control designs evolve. This paper examines some simple ways that open and closed loop frequency and time domain simulations can be done using the special structure of the system equations usually available. Routines were developed to run under Pro-Matlab in a mixture of the Pro-Matlab interpreter and FORTRAN (using the .mex facility). These routines are often orders of magnitude faster than trying the typical 'brute force' approach of using built-in Pro-Matlab routines such as bode. This makes the analyst's job easier since not only does an individual run take less time, but much larger models can be attacked, often allowing the whole model reduction step to be eliminated.
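The speedup described comes from exploiting the special structure of the system equations: for a structure with decoupled modes, the frequency response is a sum of scalar second-order terms, so each frequency point costs O(number of modes) rather than a dense matrix solve. A minimal sketch of that modal sum follows; the mode frequencies, damping ratios, and mode-shape values are invented for illustration, and this is not the Pro-Matlab/FORTRAN code the paper describes.

```python
# Modal frequency response: H(w) = sum_k phi_k^2 / (wk^2 - w^2 + 2j*zeta_k*wk*w)
# (mode data below are invented for illustration).

def modal_freq_response(omegas, modes):
    """modes: list of (natural freq wk, damping zeta_k, mode shape phi_k)."""
    out = []
    for w in omegas:
        h = 0j
        for wk, zeta, phi in modes:
            h += phi * phi / (wk * wk - w * w + 2j * zeta * wk * w)
        out.append(h)
    return out

modes = [(1.0, 0.01, 1.0), (3.0, 0.02, 0.5)]
H = modal_freq_response([0.0, 1.0], modes)   # DC and first resonance
```

Evaluating the sum mode-by-mode is what lets such routines beat a brute-force dense solve at every frequency, which is the "orders of magnitude faster" effect reported in the abstract.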
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Mizinski, Bartlomiej; Swierczynska-Chlasciak, Malgorzata
2017-04-01
The HydroProg system, a real-time multimodel hydrologic ensemble system developed at the University of Wroclaw (Poland) within the framework of research grant no. 2011/01/D/ST10/04171 financed by the National Science Centre of Poland, was experimentally launched in 2013 in the Nysa Klodzka river basin (southwestern Poland). Since then, the system has been working operationally to provide water level predictions in real time. At present, depending on the hydrologic gauge, up to eight hydrologic models are run. They are data- and physically-based solutions, with the majority being data-based. This paper reports on the performance of the implementation of the HydroProg system for the basin in question. We focus on several high-flow episodes and discuss the skill of the individual models in forecasting them. In addition, we present the performance of the multimodel ensemble solution. We also introduce a new prognosis, determined as follows: for a given lead time, we select the most skillful prediction (from the set of all individual models running at a given gauge and their multimodel ensemble) using performance statistics computed operationally in real time as a function of lead time.
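The lead-time-dependent selection described above can be sketched as follows (a hypothetical Python illustration; the model names, error statistics, and values are invented, not taken from HydroProg):

```python
import numpy as np

def best_forecast_per_lead(forecasts, errors):
    """For each lead time, pick the forecast from the model with the
    lowest operational error statistic (e.g. a running RMSE) at that lead.

    forecasts: dict model_name -> array of predictions, one per lead time
    errors:    dict model_name -> array of error statistics, one per lead time
    """
    names = list(forecasts)
    err = np.array([errors[n] for n in names])        # shape (models, leads)
    best = err.argmin(axis=0)                         # winning model per lead
    combined = np.array([forecasts[names[m]][k] for k, m in enumerate(best)])
    chosen = [names[m] for m in best]
    return combined, chosen

# Hypothetical water-level predictions (m) for three lead times.
forecasts = {"arima": np.array([1.2, 1.4, 1.9]),
             "hydro": np.array([1.1, 1.5, 1.7]),
             "ensemble": np.array([1.15, 1.45, 1.8])}
errors = {"arima": np.array([0.10, 0.30, 0.50]),
          "hydro": np.array([0.20, 0.25, 0.20]),
          "ensemble": np.array([0.15, 0.20, 0.30])}
pred, chosen = best_forecast_per_lead(forecasts, errors)
```

Different models win at different lead times, which is exactly why the selection is made per lead rather than once per gauge.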
Yan, Xuedong; Liu, Yang; Xu, Yongcun
2015-01-01
Drivers' incorrect decisions of crossing signalized intersections at the onset of the yellow change may lead to red light running (RLR), and RLR crashes result in substantial numbers of severe injuries and property damage. In recent years, some Intelligent Transport System (ITS) concepts have focused on reducing RLR by alerting drivers that they are about to violate the signal. The objective of this study is to conduct an experimental investigation on the effectiveness of the red light violation warning system using a voice message. In this study, the prototype concept of the RLR audio warning system was modeled and tested in a high-fidelity driving simulator. According to the concept, when a vehicle is approaching an intersection at the onset of yellow and the time to the intersection is longer than the yellow interval, the in-vehicle warning system can activate the following audio message "The red light is impending. Please decelerate!" The intent of the warning design is to encourage drivers who cannot clear an intersection during the yellow change interval to stop at the intersection. The experimental results showed that the warning message could decrease red light running violations by 84.3 percent. Based on the logistic regression analyses, drivers without a warning were about 86 times more likely to make go decisions at the onset of yellow and about 15 times more likely to run red lights than those with a warning. Additionally, it was found that the audio warning message could significantly reduce RLR severity because the RLR drivers' red-entry times without a warning were longer than those with a warning. This driving simulator study showed a promising effect of the audio in-vehicle warning message on reducing RLR violations and crashes. It is worthwhile to further develop the proposed technology in field applications.
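The warning logic in the concept, alerting when the time to the intersection at yellow onset exceeds the yellow interval, reduces to a simple decision rule. A minimal sketch (Python, with hypothetical distances and speeds; the actual prototype logic may use additional inputs):

```python
def warn_at_yellow_onset(distance_m, speed_mps, yellow_s):
    """Trigger the audio warning if, at the onset of yellow, the vehicle's
    time to the intersection exceeds the yellow interval, i.e. the driver
    cannot clear the intersection before the signal turns red."""
    return distance_m / speed_mps > yellow_s

# 60 m from the stop line at 15 m/s with a 3 s yellow: 4 s to arrive,
# so the message "The red light is impending. Please decelerate!" plays.
far = warn_at_yellow_onset(60.0, 15.0, 3.0)
# 30 m out at 15 m/s: 2 s to arrive, the driver can clear, so no warning.
near = warn_at_yellow_onset(30.0, 15.0, 3.0)
```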
NASA Astrophysics Data System (ADS)
Kramer, J. L. A. M.; Ullings, A. H.; Vis, R. D.
1993-05-01
A real-time data acquisition system for microprobe analysis has been developed at the Free University of Amsterdam. The system is composed of two parts: a front-end real-time system and a back-end monitoring system. The front-end consists of a VMEbus-based system which reads out a CAMAC crate. The back-end is implemented on a Sun workstation running the UNIX operating system. This separation allows the integration of a minimal, and consequently very fast, real-time executive with the sophisticated facilities of advanced UNIX workstations.
Case Study: Mobile Photovoltaic System at Bechler Meadows Ranger Station, Yellowstone National Park
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andy Walker
The mobile PV/generator hybrid system deployed at Bechler Meadows provides a number of advantages. It reduces on-site air emissions from the generator. Batteries allow the generator to operate only at its rated power, reducing run-time and fuel consumption. Energy provided by the solar array reduces fuel consumption and run-time of the generator. The generator is off for most hours, providing peace and quiet at the site. Maintenance trips from Mammoth Hot Springs to the remote site are reduced. The frequency of intrusive fuel deliveries to the pristine site is reduced. And the system gives rangers a chance to interpret Green Park values to the visiting public. As an added bonus, the system provides all these benefits at a lower cost than the base case of using only a propane-fueled generator, reducing life cycle cost by about 26%.
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in power system algorithms, simulation, and modeling, specifically to achieve run times that are useful to industry. If power system time-domain simulations could run in real time, system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, solving these large linear systems within a reasonable time becomes increasingly computationally demanding. This project expands on current work in fast linear solvers, developed for symmetric, diagonally dominant linear systems, to produce power-system-specific methods with nearly linear run times. The work explores a new theoretical method based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion of how to further improve the method's speed and accuracy is included.
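A full low-stretch spanning-tree chain preconditioner is beyond a short sketch, but the role a preconditioner plays inside an iterative solver can be illustrated with a Jacobi-preconditioned conjugate gradient on a small symmetric, diagonally dominant system (Python/NumPy; this is a generic stand-in, not the paper's chain method):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    system. A diagonal (Jacobi) preconditioner stands in here for the
    low-stretch spanning-tree preconditioners used in nearly-linear solvers."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Tiny diagonally dominant test system (a stand-in for a power-flow matrix).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = pcg(A, b, 1.0 / np.diag(A))
```

Better preconditioners (such as the spanning-tree chains in the paper) reduce the iteration count as the system grows, which is where the nearly linear total run time comes from.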
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
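MMTAT's internal models are not described in the abstract, but the kind of link analysis such a tool performs can be illustrated with a standard free-space link-budget calculation (Python sketch; the EIRP, G/T, and threshold numbers below are hypothetical, not MMTAT parameters):

```python
import math

def free_space_path_loss_db(distance_km, freq_ghz):
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_GHz) + 92.45 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

def link_margin_db(eirp_dbw, gain_over_temp_dbk, distance_km, freq_ghz,
                   required_cn0_dbhz, misc_losses_db=0.0):
    """C/N0 = EIRP + G/T - path loss - misc losses + 228.6 (Boltzmann
    constant in dBW/K/Hz); margin is C/N0 minus the required C/N0."""
    cn0 = (eirp_dbw + gain_over_temp_dbk
           - free_space_path_loss_db(distance_km, freq_ghz)
           - misc_losses_db + 228.6)
    return cn0 - required_cn0_dbhz

# Example: X-band (8.4 GHz) downlink from 2.0e8 km with a hypothetical
# 50 dBW spacecraft EIRP and a 50 dB/K ground-station G/T.
margin = link_margin_db(50.0, 50.0, 2.0e8, 8.4, 30.0)
```

Sweeping any input (distance, frequency, EIRP) and plotting the margin reproduces, in miniature, the parameter-variation graphs the abstract describes.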
Real-time Avatar Animation from a Single Image.
Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F
2011-01-01
A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by industry. This paper analyzed EnergyPlus run time from several perspectives to identify the key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time in light of advances in computers and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
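The code-profiling step mentioned above, finding which subroutines consume the most run time, can be reproduced in miniature with Python's built-in profiler (the subroutine names here are invented stand-ins, not actual EnergyPlus routines):

```python
import cProfile
import io
import pstats

def surface_heat_balance(n):
    # Stand-in for an expensive simulation subroutine.
    s = 0.0
    for i in range(n):
        s += (i % 7) * 0.5
    return s

def hvac_step(n):
    # Stand-in for a cheaper per-timestep subroutine.
    return sum(i * i for i in range(n))

def run_simulation():
    for _ in range(50):          # 50 simulated time steps
        surface_heat_balance(2000)
        hvac_step(500)

profiler = cProfile.Profile()
profiler.enable()
run_simulation()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()   # hot subroutines appear at the top of the report
```

Sorting by cumulative time identifies the subroutines worth optimizing first, which is the essence of the profiling analysis the paper performed on the EnergyPlus code base.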
Very high cell density perfusion of CHO cells anchored in a non-woven matrix-based bioreactor.
Zhang, Ye; Stobbe, Per; Silvander, Christian Orrego; Chotteau, Véronique
2015-11-10
Recombinant Chinese Hamster Ovary (CHO) cells producing IgG monoclonal antibody were cultivated in a novel perfusion culture system, CellTank, integrating the bioreactor and the cell retention function. In this system, the cells were harbored in a non-woven polyester matrix perfused by the culture medium and immersed in a reservoir. Although adapted to suspension, the CHO cells stayed entrapped in the matrix. The cell-free medium was efficiently circulated from the reservoir into and through the matrix by a centrifugal pump placed at the bottom of the bioreactor, resulting in highly homogeneous concentrations of nutrients and metabolites in the whole system, as confirmed by measurements from different sampling locations. A real-time biomass sensor using the dielectric properties of living cells was used to measure the cell density. The performance of the CellTank was studied in three perfusion runs. A very high cell density, measured as 200 pF/cm (where 1 pF/cm is equivalent to 1 × 10^6 viable cells/mL), was achieved at a perfusion rate of 10 reactor volumes per day (RV/day) in the first run. In the second run, the effect of cell growth arrest by hypothermia, at temperatures lowered gradually from 37 °C to 29 °C, was studied during 13 days at cell densities above 100 pF/cm. Finally, a production run was performed at high cell densities, where a temperature shift to 31 °C was applied at a cell density of 100 pF/cm during a production period of 14 days in minimized feeding conditions. The IgG concentrations were comparable in the matrix and in the harvest line in all runs, indicating no retention of the product of interest. The cell-specific productivity was comparable to or higher than in Erlenmeyer flask batch culture. During the production run, the final harvested IgG production was 35 times higher in the CellTank compared to a repeated batch culture in the same vessel volume over the same time period. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Multi-Sensor Information Integration and Automatic Understanding
2008-11-01
also produced a real-time implementation of the tracking and anomalous behavior detection system that runs on real-world data – either using real-time ... surveillance and airborne IED detection. 15. SUBJECT TERMS: multi-hypothesis tracking, particle filters, anomalous behavior detection, Bayesian ... analyst to support decision making with large data sets. A key feature of the real-time tracking and behavior detection system developed is that the
Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running.
de Ruiter, Cornelis J; van Oeveren, Ben; Francke, Agnieta; Zijlstra, Patrick; van Dieen, Jaap H
2016-01-01
The number of validation studies of commercially available foot pods that estimate running speed is limited, and these studies have been conducted under laboratory conditions. Moreover, the internal data handling and algorithms used to derive speed from these pods are proprietary and thereby unclear. The present study investigates the use of foot contact time (CT) for running speed estimation, which can potentially be used in addition to the global positioning system (GPS) in situations where GPS performance is limited. CT was measured with triaxial inertial sensors attached to the feet of 14 runners during natural over-ground outdoor running, under conditions optimized for GPS. The individual relationships between running speed and CT were established during short runs at different speeds on two days. These relations were subsequently used to predict instantaneous speed during a straight-line 4 km run with a single turning point halfway. Stopwatch-derived speed, measured for each of 32 consecutive 125 m intervals during the 4 km runs, was used as reference. Individual speed-CT relations were strong (r2 > 0.96 for all trials) and consistent between days. During the 4 km runs, the median (range) error in speed predicted from CT, 2.5% (5.2), was higher (P < 0.05) than that for GPS, 1.6% (0.8). However, around the turning point and during the first and last 125 m intervals, the error in GPS speed increased to 5.0% (4.5) and became greater (P < 0.05) than the error in speed predicted from CT: 2.7% (4.4). Small speed fluctuations during the 4 km runs were adequately monitored with both methods: CT and GPS respectively explained 85% and 73% of the total speed variance. In conclusion, running speed estimates based on speed-CT relations have acceptable accuracy and could serve as a backup or substitute for GPS during tarmac running on flat terrain whenever GPS performance is limited.
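The study calibrates an individual speed-CT relation per runner and then inverts it to predict speed. A minimal sketch of that idea, assuming a linear relation between speed and 1/CT (the study's actual functional form is not given in the abstract; the calibration data below are synthetic):

```python
import numpy as np

def fit_speed_vs_inverse_ct(ct_s, speed_mps):
    """Least-squares fit of speed = a + b / CT for one runner, mirroring
    the individual calibration runs at different speeds."""
    X = np.vstack([np.ones_like(ct_s), 1.0 / ct_s]).T
    (a, b), *_ = np.linalg.lstsq(X, speed_mps, rcond=None)
    return a, b

def predict_speed(a, b, ct_s):
    """Invert the calibrated relation to estimate instantaneous speed."""
    return a + b / ct_s

# Synthetic calibration data for one runner (contact time shrinks as
# speed rises, as observed in practice).
ct = np.array([0.30, 0.27, 0.24, 0.21, 0.19])      # contact times (s)
speed = np.array([3.2, 3.6, 4.1, 4.8, 5.4])        # speeds (m/s)
a, b = fit_speed_vs_inverse_ct(ct, speed)
pred = predict_speed(a, b, 0.22)                   # speed at CT = 0.22 s
```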
A novel process control method for a TT-300 E-Beam/X-Ray system
NASA Astrophysics Data System (ADS)
Mittendorfer, Josef; Gallnböck-Wagner, Bernhard
2018-02-01
This paper presents some aspects of the process control method for a TT-300 E-Beam/X-Ray system at Mediscan, Austria. The novelty of the approach is the seamless integration of routine monitoring dosimetry with process data. This makes it possible to calculate a parametric dose for each production unit and consequently enables fine-grained, holistic monitoring of process performance. Process performance is documented in process control charts, both for the analysis of individual runs and for historic trending of runs of specific process categories over a specified time range.
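The control-charting of per-unit parametric doses can be sketched with standard Shewhart individuals-chart limits (centre line at the mean, control limits at ±3 standard deviations); a Python illustration with invented dose values, not Mediscan data:

```python
import statistics

def control_chart_limits(doses_kgy):
    """Shewhart-style individuals chart: centre line at the mean with
    control limits at +/- 3 standard deviations of the historic doses."""
    mean = statistics.fmean(doses_kgy)
    sd = statistics.stdev(doses_kgy)
    return mean - 3 * sd, mean, mean + 3 * sd

def out_of_control(doses_kgy, new_dose):
    """Flag a new production unit whose parametric dose falls outside
    the control limits derived from its process category's history."""
    lo, _, hi = control_chart_limits(doses_kgy)
    return not (lo <= new_dose <= hi)

# Hypothetical historic parametric doses (kGy) for one process category.
history = [25.1, 25.4, 24.9, 25.2, 25.0, 25.3, 24.8, 25.1]
flag_ok = out_of_control(history, 25.2)    # inside the limits
flag_bad = out_of_control(history, 27.5)   # well outside, flagged
```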
Tethered satellite system dynamics and control review panel and related activities, phase 3
NASA Technical Reports Server (NTRS)
1991-01-01
Two major tests of the Tethered Satellite System (TSS) engineering and flight units were conducted to demonstrate the functionality of the hardware and software. Deficiencies in the hardware/software integration tests (HSIT) led to a recommendation for more testing to be performed. Selected problem areas of tether dynamics were analyzed, including verification of the severity of skip rope oscillations, verification or comparison runs to explore dynamic phenomena observed in other simulations, and data generation runs to explore the performance of the time domain and frequency domain skip rope observers.
A Unix SVR-4-OS9 distributed data acquisition for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouhin, F.; Schwaller, B.; Fontaine, J.C.
1998-08-01
The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX-compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high-speed network. The Unix system manages a list of OS9 front-end systems with a synchronization protocol running over a TCP/IP layer.
Australia's marine virtual laboratory
NASA Astrophysics Data System (ADS)
Proctor, Roger; Gillibrand, Philip; Oke, Peter; Rosebrock, Uwe
2014-05-01
In all modelling studies of realistic scenarios, a researcher has to go through a number of steps to set up a model in order to produce a model simulation of value. The steps are generally the same, independent of the modelling system chosen. These steps include determining the time and space scales and processes of the required simulation; obtaining data for the initial set-up and for input during the simulation time; obtaining observation data for validation or data assimilation; implementing scripts to run the simulation(s); and running utilities or custom-built software to extract results. These steps are time consuming and resource hungry, and have to be done every time irrespective of the simulation - the more complex the processes, the more effort is required to set up the simulation. The Australian Marine Virtual Laboratory (MARVL) is a new development in modelling frameworks for researchers in Australia. MARVL uses the TRIKE framework, a Java-based control system developed by CSIRO that allows a non-specialist user to configure and run a model, to automate many of the modelling preparation steps needed to bring the researcher faster to the stage of simulation and analysis. The tool is seen as enhancing the efficiency of researchers and marine managers, and is being considered as an educational aid in teaching.
In MARVL we are developing a web-based open source application which provides a number of model choices and provides search and recovery of relevant observations, allowing researchers to: a) efficiently configure a range of different community ocean and wave models for any region, for any historical time period, with model specifications of their choice, through a user-friendly web application, b) access data sets to force a model and nest a model into, c) discover and assemble ocean observations from the Australian Ocean Data Network (AODN, http://portal.aodn.org.au/webportal/) in a format that is suitable for model evaluation or data assimilation, and d) run the assembled configuration in a cloud computing environment, or download the assembled configuration and packaged data to run on any other system of the user's choice. MARVL is now being applied in a number of case studies around Australia ranging in scale from locally confined estuaries to the Tasman Sea between Australia and New Zealand. In time we expect the range of models offered will include biogeochemical models.
Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.
Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas
2011-01-01
The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high-intensity fed-batch mammalian cell culture processes needs to be evaluated. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated in three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time, and product quality was compared at the day of harvest. Expansion stage results showed that disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high-intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
Research of x-ray nondestructive detector for high-speed running conveyor belt with steel wire ropes
NASA Astrophysics Data System (ADS)
Wang, Junfeng; Miao, Changyun; Wang, Wei; Lu, Xiaocui
2008-03-01
An X-ray nondestructive detector for high-speed running conveyor belts with steel wire ropes is presented in this paper. The principle of X-ray nondestructive testing (NDT) is analyzed, the general scheme of the X-ray nondestructive testing system is proposed, and a nondestructive detector for high-speed running conveyor belts with steel wire ropes is developed. The system hardware is designed around Xilinx's Virtex-4 FPGA, which embeds a PowerPC and a MAC IP core, and its network communication software, based on the TCP/IP protocol, is programmed by loading LwIP onto the PowerPC. Nondestructive testing of high-speed conveyor belts with steel wire ropes and the network transfer function are implemented. It is a system with strong real-time performance, rapid scanning speed, high reliability, and a remote nondestructive testing function. The detector can be applied to the inspection of production lines in industry.
Optimizing Mars Airplane Trajectory with the Application Navigation System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Riley, Derek
2004-01-01
Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that ANS simplifies user effort and reduces the time needed to optimize the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest-path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. ANS then uses a scheduler to dynamically assign ready tasks to machine resources, and the GridScape to track available resources and forecast the completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
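The abstract pairs trajectory cost evaluations with a shortest-path search. A generic Dijkstra sketch over a hypothetical waypoint graph (Python; the edge costs stand in for per-leg evaluations such as Cart3D runs):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted digraph given as
    {node: [(neighbor, cost), ...]}; returns the best path and its cost."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical waypoint graph with per-leg trajectory costs.
graph = {"A": [("B", 2.0), ("C", 5.0)],
         "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.0)]}
path, cost = shortest_path(graph, "A", "D")
```

In the distributed setting described, the expensive part is evaluating each edge cost; the graph search itself is cheap once those evaluations complete.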
The Error Reporting in the ATLAS TDAQ System
NASA Astrophysics Data System (ADS)
Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos
2015-05-01
The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of the given language to simplify end-user programming. For example, since C++ lacks language-level support for concisely declaring rich exception class hierarchies, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue.
When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one of the available class constructors, and send this instance to ERS. This paper presents the original design solutions exploited for the ERS implementation and describes how it was used during the first ATLAS run period. The cross-system error reporting standardization introduced by ERS was one of the key points for the successful implementation of automated mechanisms for online error recovery.
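The macro-generated exception hierarchies have a natural analogue in Python, where a one-line factory call can create a fully described exception class (a sketch of the idea only, not the actual ERS API; the class names, field names, and run number below are invented):

```python
def declare_issue(name, base=Exception, fields=()):
    """Generate an exception class with named context fields in one call,
    a Python analogue of the C++ macros ERS uses to build exception
    hierarchies at compile time."""
    def __init__(self, message, **kwargs):
        missing = [f for f in fields if f not in kwargs]
        if missing:
            raise TypeError(f"{name} missing fields: {missing}")
        self.context = dict(kwargs)          # static issue information
        Exception.__init__(self, message)
    return type(name, (base,), {"__init__": __init__, "fields": tuple(fields)})

# One line per issue type, mirroring the single-macro-per-class approach.
DataTakingIssue = declare_issue("DataTakingIssue", fields=("run_number",))
ReadoutTimeout = declare_issue("ReadoutTimeout", DataTakingIssue,
                               fields=("run_number", "module"))

# Hypothetical usage: construct and inspect an issue instance.
err = ReadoutTimeout("ROD did not respond", run_number=281411, module="ROD-7")
```

A receiver can then catch at any level of the hierarchy (e.g. `except DataTakingIssue`) while the context dictionary carries the issue-specific parameters.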
NASA Astrophysics Data System (ADS)
McGill, P.; Neuhauser, D.; Romanowicz, B.
2008-12-01
The Monterey Ocean-Bottom Broadband (MOBB) seismic station was installed in April 2003, 40 km offshore from the central coast of California at a seafloor depth of 1000 m. It comprises a three-component broadband seismometer system (Guralp CMG-1T), installed in a hollow PVC caisson and buried under the seafloor; a current meter; and a differential pressure gauge. The station has been operating continuously since installation with no connection to the shore. Three times each year, the station is serviced with the aid of a Remotely Operated Vehicle (ROV) to change the batteries and retrieve the seismic data. In February 2009, the MOBB system will be connected to the Monterey Accelerated Research System (MARS) seafloor cabled observatory. The NSF-funded MARS observatory comprises a 52 km electro-optical cable that extends from a shore facility in Moss Landing out to a seafloor node in Monterey Bay. Once installation is completed in November 2008, the node will provide power and data to as many as eight science experiments through underwater electrical connectors. The MOBB system is located 3 km from the MARS node, and the two will be connected with an extension cable installed by an ROV with the aid of a cable-laying toolsled. The electronics module in the MOBB system is being refurbished to support the connection to the MARS observatory. The low-power autonomous data logger has been replaced with a PC/104 computer stack running embedded Linux. This new computer will run an Object Ring Buffer (ORB), which will collect data from the various MOBB sensors and forward it to another ORB running on a computer at the MARS shore station. There, the data will be archived and then forwarded to a third ORB running at the UC Berkeley Seismological Laboratory. Timing will be synchronized among MOBB's multiple acquisition systems using NTP, GPS clock emulation, and a precise timing signal from the MARS cable. 
The connection to the MARS observatory will provide real-time access to the MOBB data and eliminate the need for frequent servicing visits. The new system uses off-the-shelf hardware and open-source software, and will serve as a prototype for future instruments connected to seafloor cabled observatories.
A battery-run pulsed motor with inherent dynamic electronic switch control
NASA Astrophysics Data System (ADS)
Tripathi, K. C.; Lal, P.; Sarma, P. R.; Sharma, A. K.; Prakash, V.
1980-02-01
A new type of battery-run brushless ferrite-magnet dc motor system is described. Its rotor consists of a few permanent ceramic (ferrite) magnets uniformly spread on the rim of a disk (wheel), and the stator consists of electromagnets placed such that, when energized, they always form a repulsive couple to rotate the disk. A sensor coil is placed to give an induced pulse signal, which acts as an inherent dynamic switching-time control for the automatic electronic control system. Speed control, braking, and safety measures are also discussed. Experimental values for the present system are given. Some possible applications are suggested.
Research on pressure control of pressurizer in pressurized water reactor nuclear power plant
NASA Astrophysics Data System (ADS)
Dai, Ling; Yang, Xuhong; Liu, Gang; Ye, Jianhua; Qian, Hong; Xue, Yang
2010-07-01
The pressurizer is one of the most important components in a nuclear reactor system. Its function is to maintain the pressure of the primary circuit: it holds the pressure at its setpoint during normal operation and keeps normal transients from forcing a reactor shutdown. This paper studies the pressurizer pressure control system running in the Daya Bay Nuclear Power Plant. A conventional PID controller and a fuzzy controller are designed by analyzing the dynamic characteristics and calculating the transfer function. A fuzzy PID controller is then designed by comparing the results of the two controllers; this fuzzy PID controller ultimately achieves the best control performance.
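The controller comparison the abstract describes can be sketched as a conventional discrete PID plus a crude fuzzy gain schedule, exercised against a toy plant. All gains, membership breakpoints, and plant constants below are illustrative assumptions, not values from the paper.

```python
class PID:
    """Discrete PID controller (gains below are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def fuzzy_gain(err):
    """Toy fuzzy schedule: push hard when the pressure error is large,
    soften near the setpoint (membership breakpoints are assumptions)."""
    e = abs(err)
    if e > 0.5:
        return 1.5   # "large error" -> aggressive correction
    if e > 0.1:
        return 1.0   # "medium error" -> nominal gain
    return 0.6       # "small error" -> damp oscillations

# Crude integrator surrogate for pressurizer pressure (MPa), setpoint 15.5 MPa.
setpoint, p, dt = 15.5, 14.0, 0.1
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
for _ in range(600):
    u = pid.update(setpoint, p) * fuzzy_gain(setpoint - p)
    p += u * dt  # heater/spray command integrates into pressure
print(round(p, 3))  # settles close to the 15.5 MPa setpoint
```

The fuzzy layer here only rescales the proportional action; a full fuzzy PID, as in the paper, would adjust all three gains from fuzzified error and error-rate inputs.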
Završnik, Jernej; Pišot, Rado; Šimunič, Boštjan; Kokol, Peter; Blažun Vošner, Helena
2017-02-01
Objective: To investigate associations between running speeds and contraction times in 8- to 13-year-old children. Method: This longitudinal study analyzed tensiomyographic measurements of vastus lateralis and biceps femoris muscles' contraction times and maximum running speeds in 107 children (53 boys, 54 girls). Data were evaluated using multiple correspondence analysis. Results: A gender difference existed between the vastus lateralis contraction times and running speeds. The running speed was less dependent on vastus lateralis contraction times in boys than in girls. Analysis of biceps femoris contraction times and running speeds revealed that running speeds of boys were much more structurally associated with contraction times than those of girls, for whom the association seemed chaotic. Conclusion: Joint category plots showed that contraction times of biceps femoris were associated much more closely with running speed than those of the vastus lateralis muscle. These results provide insight into a new dimension of children's development.
The Effect of Increasing Inertia upon Vertical Ground Reaction Forces during Locomotion
NASA Technical Reports Server (NTRS)
DeWitt, John K.; Hagan, R. Donald; Cromwell, Ronita L.
2007-01-01
The addition of inertia to exercising astronauts could increase ground reaction forces and potentially provide a greater health benefit. However, conflicting results have been reported regarding the adaptations to additional mass (inertia) without additional net weight (gravitational force) during locomotion. We examined the effect of increasing inertia while maintaining net gravitational force on vertical ground reaction forces and kinematics during walking and running. Vertical ground reaction force was measured for ten healthy adults (5 male/5 female) during walking (1.34 m/s) and running (3.13 m/s) using a force-measuring treadmill. Subjects completed locomotion at normal weight and mass, and at 10, 20, 30, and 40% of added inertial force. The added gravitational force was relieved with overhead suspension, so that the net force between the subject and treadmill at rest remained equal to 100% body weight. Peak vertical impact forces and loading rates increased with increased inertia during walking, and decreased during running. As inertia increased, peak vertical propulsive forces decreased during walking and did not change during running. Stride time increased during walking and running, and contact time increased during running. Vertical ground reaction force production and adaptations in gait kinematics were different between walking and running. The increased inertial forces were utilized independently from gravitational forces by the motor control system when determining coordination strategies.
Prototype methodology for obtaining cloud seeding guidance from HRRR model data
NASA Astrophysics Data System (ADS)
Dawson, N.; Blestrud, D.; Kunkel, M. L.; Waller, B.; Ceratto, J.
2017-12-01
Weather model data, along with real-time observations, are critical for determining whether atmospheric conditions are conducive to super-cooled liquid water during cloud seeding operations. Cloud seeding groups can either use operational forecast models or run their own model on a computer cluster. A custom weather model provides the most flexibility, but is also expensive. For programs with smaller budgets, openly available operational forecasting models are the de facto method for obtaining forecast data. The new High-Resolution Rapid Refresh (HRRR) model (3 km x 3 km grid size), developed by the Earth System Research Laboratory (ESRL), provides hourly model runs with 18 forecast hours per run. While the model cannot be fine-tuned for a specific area or edited to provide cloud-seeding-specific output, model output is openly available on a near-real-time basis. This presentation focuses on a prototype methodology for using HRRR model data to create maps which aid in near-real-time cloud seeding decision making. The R programming language is used to run a script on a Windows® desktop/laptop computer either on a schedule (such as every half hour) or manually. The latest HRRR model run is downloaded from NOAA's Operational Model Archive and Distribution System (NOMADS). A GRIB-filter service, provided by NOMADS, is used to obtain surface and mandatory-pressure-level data for a subset domain, which greatly cuts down on the amount of data transfer. Then, a set of criteria identified by the Idaho Power Atmospheric Science Group is used to create guidance maps. These criteria include atmospheric stability (lapse rates), dew point depression, air temperature, and wet bulb temperature. The maps highlight potential areas where super-cooled liquid water may exist, reasons why cloud seeding should not be attempted, and wind speed at flight level.
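The decision step can be sketched as a rule that tests each grid cell against criteria of the kind the abstract lists and records the reasons seeding should not be attempted. The numeric thresholds below are illustrative assumptions, not the Idaho Power Atmospheric Science Group's actual criteria, and the sketch is in Python rather than the R used operationally.

```python
def seeding_guidance(cell):
    """Flag a grid cell as a super-cooled-liquid-water candidate, or return
    the reasons it fails. Threshold values are assumptions for illustration."""
    reasons = []
    if cell["lapse_rate_c_per_km"] < 6.5:       # too stable
        reasons.append("atmosphere too stable")
    if cell["dewpoint_depression_c"] > 3.0:     # too dry
        reasons.append("air too dry for liquid water")
    if not (-15.0 <= cell["temp_c"] <= -4.0):   # outside assumed seeding band
        reasons.append("temperature outside seeding window")
    if cell["wet_bulb_c"] > 0.0:
        reasons.append("wet-bulb above freezing")
    return (len(reasons) == 0, reasons)

grid = [
    {"lapse_rate_c_per_km": 7.2, "dewpoint_depression_c": 1.0,
     "temp_c": -8.0, "wet_bulb_c": -6.0},   # good candidate
    {"lapse_rate_c_per_km": 5.0, "dewpoint_depression_c": 6.0,
     "temp_c": -2.0, "wet_bulb_c": 1.0},    # fails every criterion
]
for cell in grid:
    ok, why = seeding_guidance(cell)
    print("seed" if ok else "no-seed: " + "; ".join(why))
```

Mapping this function over every HRRR grid point for each forecast hour yields the go/no-go guidance maps the presentation describes.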
40 CFR 258.26 - Run-on/run-off control systems.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Run-on/run-off control systems. 258.26 Section 258.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Operating Criteria § 258.26 Run-on/run-off control systems. (a...
40 CFR 258.26 - Run-on/run-off control systems.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Run-on/run-off control systems. 258.26 Section 258.26 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS Operating Criteria § 258.26 Run-on/run-off control systems. (a...
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
artdaq: DAQ software development made simple
NASA Astrophysics Data System (ADS)
Biery, Kurt; Flumerfelt, Eric; Freeman, John; Ketchum, Wesley; Lukhanin, Gennadiy; Rechenmacher, Ron
2017-10-01
For a few years now, the artdaq data acquisition software toolkit has provided numerous experiments with ready-to-use components which allow for rapid development and deployment of DAQ systems. Developed within the Fermilab Scientific Computing Division, artdaq provides data transfer, event building, run control, and event analysis functionality. This latter feature includes built-in support for the art event analysis framework, allowing experiments to run art modules for real-time filtering, compression, disk writing and online monitoring. Since art, also developed at Fermilab, is used for offline analysis as well, a major advantage of artdaq is that it allows developers to easily switch between developing online and offline software. artdaq continues to be improved. Support has been added for an alternate mode of running whereby data from some subdetector components are only streamed if requested; this option will reduce unnecessary DAQ throughput. Real-time reporting of DAQ metrics has been implemented, along with the flexibility to choose the format through which experiments receive the reports; these formats include the Ganglia, Graphite and syslog software packages, along with flat ASCII files. Additionally, work has been performed investigating more flexible modes of online monitoring, including the capability to run multiple online monitoring processes on different hosts, each running its own set of art modules. Finally, a web-based GUI interface through which users can configure details of their DAQ system has been implemented, increasing the ease of use of the system. Already successfully deployed on the LArIAT, DarkSide-50, DUNE 35ton and Mu2e experiments, artdaq will be employed for SBND and is a strong candidate for use on ICARUS and protoDUNE. With each experiment comes new ideas for how artdaq can be made more flexible and powerful. The above improvements will be described, along with potential ideas for the future.
Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.
Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y
2011-05-01
This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km·h⁻¹, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior changed significantly over the 24TR, with a 4.9% higher step frequency on average (because of a significantly shorter contact time, by 4.5%), a lower maximal GRF (by 4.4% on average), a 13.0% smaller leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and may contribute to an overall limitation of the potentially harmful consequences of such a long-duration run on the subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running. © 2011 by the American College of Sports Medicine.
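The spring-mass quantities behind these findings reduce to two ratios: vertical stiffness is peak GRF over the downward displacement of the center of mass, and leg stiffness is peak GRF over the leg length change. A sketch with made-up pre/post values that merely echo the direction of the reported changes (the numbers are not the study's data):

```python
def spring_mass_stiffness(f_max_n, com_drop_m, leg_len_change_m):
    """Spring-mass stiffness measures used in running analyses:
    k_vert = Fmax / Δy_com,  k_leg = Fmax / ΔL."""
    return f_max_n / com_drop_m, f_max_n / leg_len_change_m

# Invented pre/post-24TR values: lower peak GRF plus smaller leg
# compression together raise the computed stiffness.
k_vert_pre,  k_leg_pre  = spring_mass_stiffness(1600.0, 0.055, 0.160)
k_vert_post, k_leg_post = spring_mass_stiffness(1530.0, 0.048, 0.139)
print(round(k_leg_post / k_leg_pre - 1, 3))  # → 0.101 (about +10% leg stiffness)
```

The key point the computation makes explicit: stiffness can rise even while peak force falls, provided the leg compresses proportionally less.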
NASA Technical Reports Server (NTRS)
Case, Jonathan L; White, Kristopher D.
2014-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework (hereafter referred to as the "SPoRT-LIS"). Output from the real-time SPoRT-LIS is used for (1) initializing land surface variables for local modeling applications, and (2) displaying in decision support systems for situational awareness and drought monitoring at select NOAA/National Weather Service (NWS) partner offices. The experimental CONUS run incorporates hourly quantitative precipitation estimation (QPE) from the National Severe Storms Laboratory Multi-Radar Multi-Sensor (MRMS) product, which will be transitioned into operations at the National Centers for Environmental Prediction (NCEP) in Fall 2014. This paper describes the current and experimental SPoRT-LIS configurations, and documents some of the limitations that remain even with the advent of MRMS precipitation analyses in the SPoRT-LIS land surface model simulations.
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Yang, Yuekui
2016-01-01
Satellites always sample the Earth-atmosphere system at a finite temporal resolution. This study investigates the effect of sampling frequency on the satellite-derived Earth radiation budget, with the Deep Space Climate Observatory (DSCOVR) as an example. The output from NASA's Goddard Earth Observing System Version 5 (GEOS-5) Nature Run is used as the truth. The Nature Run is a high spatial and temporal resolution atmospheric simulation spanning a two-year period. The effect of temporal resolution on potential DSCOVR observations is assessed by sampling the full Nature Run data with 1-h to 24-h frequencies. The uncertainty associated with a given sampling frequency is measured by computing means over daily, monthly, seasonal and annual intervals and determining the spread across different possible starting points. The skill with which a particular sampling frequency captures the structure of the full time series is measured using correlations and normalized errors. Results show that higher sampling frequency gives more information and less uncertainty in the derived radiation budget. A sampling frequency coarser than every 4 h results in significant error. Correlations between true and sampled time series also decrease more rapidly for sampling frequencies coarser than every 4 h.
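The phase-spread methodology can be sketched directly: subsample an hourly "truth" series at a given cadence from every possible starting point, and take the spread of the resulting means as the uncertainty that cadence introduces. The diurnal toy series below is an assumption standing in for Nature Run output.

```python
import math

def subsample_uncertainty(series, step):
    """Spread (max - min) of the means of every phase-shifted subsample
    taken every `step` samples: the uncertainty a coarser cadence adds."""
    means = [sum(series[s::step]) / len(series[s::step]) for s in range(step)]
    return max(means) - min(means)

# Hourly 'truth' with a diurnal cycle over 30 days (illustrative stand-in).
truth = [240.0 + 30.0 * math.sin(2 * math.pi * h / 24) for h in range(24 * 30)]
coarse = subsample_uncertainty(truth, 24)  # one sample per day
fine = subsample_uncertainty(truth, 4)     # every 4 hours
print(fine < coarse)  # → True: finer sampling, smaller spread
```

Sampling once per day always hits the same phase of the diurnal cycle, so the mean depends strongly on the starting hour; 4-hourly sampling averages over the cycle and the starting-point dependence nearly vanishes.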
Effects of restricted feeding schedules on circadian organization in squirrel monkeys
NASA Technical Reports Server (NTRS)
Boulos, Z.; Frim, D. M.; Dewey, L. K.; Moore-Ede, M. C.
1989-01-01
Free running circadian rhythms of motor activity, food-motivated lever-pressing, and either drinking (N = 7) or body temperature (N = 3) were recorded from 10 squirrel monkeys maintained in constant illumination with unlimited access to food. Food availability was then restricted to a single unsignaled 3-hour interval each day. The feeding schedule failed to entrain the activity rhythms of 8 monkeys, which continued to free-run. Drinking was almost completely synchronized by the schedule, while body temperature showed a feeding-induced rise superimposed on a free-running rhythm. Nonreinforced lever-pressing showed both a free-running component and a 24-hour component that anticipated the time of feeding. At the termination of the schedule, all recorded variables showed free-running rhythms, but in 3 animals the initial phase of the postschedule rhythms was advanced by several hours, suggesting relative coordination. Of the remaining 2 animals, one exhibited stable entrainment of all 3 recorded rhythms, while the other appeared to entrain temporarily to the feeding schedule. These results indicate that restricted feeding schedules are only a weak zeitgeber for the circadian pacemaker generating free-running rhythms in the squirrel monkey. Such schedules, however, may entrain a separate circadian system responsible for the timing of food-anticipatory changes in behavior and physiology.
ALICE HLT Run 2 performance overview.
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Lindenstruth, Volker;
2017-10-01
For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan
1994-01-01
A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedup in some cases, are possible.
NASA Astrophysics Data System (ADS)
Randers, Jorgen; Golüke, Ulrich; Wenstøp, Fred; Wenstøp, Søren
2016-11-01
We have made a simple system dynamics model, ESCIMO (Earth System Climate Interpretable Model), which runs on a desktop computer in seconds and is able to reproduce the main output from more complex climate models. ESCIMO represents the main causal mechanisms at work in the Earth system and is able to reproduce the broad outline of climate history from 1850 to 2015. We have run many simulations with ESCIMO to 2100 and beyond. In this paper we present the effects of introducing in 2015 six possible global policy interventions that cost around USD 1000 billion per year - around 1 % of world GDP. We tentatively conclude (a) that these policy interventions can at most reduce the global mean surface temperature - GMST - by up to 0.5 °C in 2050 and up to 1.0 °C in 2100 relative to no intervention. The exception is injection of aerosols into the stratosphere, which can reduce the GMST by more than 1.0 °C in a decade but creates other serious problems. We also conclude (b) that relatively cheap human intervention can keep global warming in this century below +2 °C relative to preindustrial times. Finally, we conclude (c) that run-away warming is unlikely to occur in this century but is likely to occur in the longer run. The ensuing warming is slow, however. In ESCIMO, it takes several hundred years to lift the GMST to +3 °C above preindustrial times through gradual self-reinforcing melting of the permafrost. We call for research to test whether more complex climate models support our tentative conclusions from ESCIMO.
ETHERNET BASED EMBEDDED SYSTEM FOR FEL DIAGNOSTICS AND CONTROLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jianxun Yan; Daniel Sexton; Steven Moore
2006-10-24
An Ethernet based embedded system has been developed to upgrade the Beam Viewer and Beam Position Monitor (BPM) systems within the free-electron laser (FEL) project at Jefferson Lab. The embedded microcontroller was mounted on the front-end I/O cards with software packages such as the Experimental Physics and Industrial Control System (EPICS) and the Real-Time Executive for Multiprocessor Systems (RTEMS) running as an Input/Output Controller (IOC). By cross-compiling against EPICS, the RTEMS kernel, IOC device supports, and databases can all be downloaded into the microcontroller. The first version of the BPM electronics based on the embedded controller was built and is currently running in our FEL system. The new version of BPM, which will use a Single Board IOC (SBIOC) integrating a Field-Programmable Gate Array (FPGA) and a ColdFire embedded microcontroller, is presently under development. The new system has the features of a low-cost IOC, an open-source real-time operating system, plug&play-like ease of installation and flexibility, and provides a much more localized solution.
Closed cycle electric discharge laser design investigation
NASA Technical Reports Server (NTRS)
Baily, P. K.; Smith, R. C.
1978-01-01
Closed cycle CO2 and CO electric discharge lasers were studied. An analytical investigation assessed scale-up parameters and design features for CO2, closed cycle, continuous wave, unstable resonator, electric discharge lasing systems operating in space and airborne environments. A space based CO system was also examined. The program objectives were the conceptual designs of six CO2 systems and one CO system. Three airborne CO2 designs, with one, five, and ten megawatt outputs, were produced. These designs were based upon five minute run times. Three space based CO2 designs, with the same output levels, were also produced, but based upon one year run times. In addition, a conceptual design for a one megawatt space based CO laser system was also produced. These designs include the flow loop, compressor, and heat exchanger, as well as the laser cavity itself. The designs resulted in a laser loop weight for the space based five megawatt system that is within the space shuttle capacity. For the one megawatt systems, the estimated weight of the entire system including laser loop, solar power generator, and heat radiator is less than the shuttle capacity.
NSTX-U Advances in Real-Time C++11 on Linux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Keith G.
2015-08-14
Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic deadline is a failure) of 200 microseconds.
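The hard-real-time requirement above, where missing even one 200-microsecond period counts as a failure, can be audited offline from a trace of cycle start times. The sketch below is Python bookkeeping over synthetic timestamps, not the DCPS itself (which is C++11 on RedHawk Linux); the slack value is an assumption.

```python
def missed_deadlines(start_times_us, period_us=200, slack_us=20):
    """Count control-loop cycles whose start-to-start interval exceeded
    the period plus an allowed jitter slack (values are illustrative)."""
    return sum(1 for a, b in zip(start_times_us, start_times_us[1:])
               if b - a > period_us + slack_us)

# Synthetic 5 kHz trace: one cycle slips, the gap 400 -> 1050 us spans
# a missed period, then the loop resumes its normal cadence.
trace_us = [0, 200, 400, 1050, 1250, 1450]
print(missed_deadlines(trace_us))  # → 1
```

In a real system this audit would run off a lock-free timestamp ring written by the control loop, so the measurement itself cannot perturb the 200 µs budget.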
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
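The scaling result reduces to speedup and parallel efficiency computed from wall-clock times at each core count. The hours below are invented to echo the reported shape (nearly linear to 64 cores, flat beyond), not the paper's measurements:

```python
def speedup_table(walltimes):
    """Speedup and parallel efficiency relative to the smallest core count.
    speedup(n) = T(base) / T(n);  efficiency(n) = speedup(n) * base / n."""
    base_cores = min(walltimes)
    base_t = walltimes[base_cores]
    return {n: (base_t / t, base_t / t * base_cores / n)
            for n, t in sorted(walltimes.items())}

# Invented wall-clock hours for one model-year at each core count.
hours = {16: 40.0, 32: 21.0, 64: 11.5, 128: 11.0}
for cores, (speedup, eff) in speedup_table(hours).items():
    print(f"{cores:4d} cores: speedup {speedup:4.2f}, efficiency {eff:4.2f}")
```

With these numbers, efficiency stays above 0.85 through 64 cores and then collapses at 128, which is the signature of communication latency overtaking the per-core work.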
CLOCS (Computer with Low Context-Switching Time) Operating System Reference Documents
1988-05-06
system are met. In sum, real-time constraints make programming harder in general, because they add a whole new dimension - the time dimension - to ... be preempted until it allows itself to be.
More is Stored; Less is Computed
Alan Jay Smith, of Berkeley, has said that any program can be made five times as swift to run, at the expense of five times the storage space. While his numbers may be questioned, his premise may not: programs can be made
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing.
We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
NASA Astrophysics Data System (ADS)
Wolk, S. J.; Petreshock, J. G.; Allen, P.; Bartholowmew, R. T.; Isobe, T.; Cresitello-Dittmar, M.; Dewey, D.
The NASA Great Observatory Chandra was launched July 23, 1999 aboard the space shuttle Columbia. The Chandra X-ray Center (CXC) runs a monitoring and trends analysis program to maximize the science return from this mission. At the time of the launch, the monitoring portion of this system was in place. The system is a collection of multiple threads and programming methodologies acting cohesively. Real-time data are passed to the CXC. Our real-time tool, ACORN (A Comprehensive object-ORiented Necessity), performs limit checking of performance-related hardware. Chandra is in ground contact less than 3 hours a day, so the bulk of the monitoring must take place on data dumped by the spacecraft. To do this, we have written several tools which run off the CXC data system pipelines. MTA_MONITOR_STATIC limit-checks FITS files containing hardware data. MTA_EVENT_MON and MTA_GRAT_MON create quick-look data for the focal plane instruments and the transmission gratings. When instruments violate their operational limits, the responsible scientists are notified by email and problem tracking is initiated. Output from all these codes is distributed to CXC scientists via an HTML interface.
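The limit-checking core of a tool like ACORN can be sketched as a comparison of each telemetry channel against its operational band, collecting the violations that would trigger an email notification. The channel names and limit values below are invented for illustration, not Chandra's actual tables.

```python
def limit_check(telemetry, limits):
    """Compare one frame of hardware telemetry against per-channel
    operational limits; return the violations needing notification.
    Channels without a limit entry are passed through unchecked."""
    alerts = []
    for chan, value in telemetry.items():
        lo, hi = limits.get(chan, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alerts.append((chan, value, (lo, hi)))
    return alerts

# Invented channels and bands, loosely styled after spacecraft telemetry.
limits = {"FP_TEMP_C": (-121.0, -90.0), "MIRROR_TEMP_C": (20.5, 21.5)}
frame = {"FP_TEMP_C": -119.7, "MIRROR_TEMP_C": 22.1}
for chan, value, (lo, hi) in limit_check(frame, limits):
    print(f"LIMIT VIOLATION {chan}: {value} outside [{lo}, {hi}]")
```

In an operational pipeline the loop would run once per dumped telemetry frame, with the alert list feeding the email and problem-tracking steps the abstract describes.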
Burger, C.V.; Finn, J.E.; Holland-Bartels, L.
1995-01-01
Alaskan sockeye salmon typically spawn in lake tributaries during summer (early run) and along clear-water lake shorelines and outlet rivers during fall (late run). Production at the glacially turbid Tustumena Lake and its outlet, the Kasilof River (south-central Alaska), was thought to be limited to a single run of sockeye salmon that spawned in the lake's clear-water tributaries. However, up to 40% of the returning sockeye salmon enumerated by sonar as they entered the lake could not be accounted for during lake tributary surveys, which suggested either substantial counting errors or that a large number of fish spawned in the lake itself. Lake shoreline spawning had not been documented in a glacially turbid system. We determined the distribution and pattern of sockeye salmon spawning in the Tustumena Lake system from 1989 to 1991 based on fish collected and radiotagged in the Kasilof River. Spawning areas and time were determined for 324 of 413 sockeye salmon tracked upstream into the lake after release. Of these, 224 fish spawned in tributaries by mid-August and 100 spawned along shoreline areas of the lake during late August. In an additional effort, a distinct late run was discovered that spawned in the Kasilof River at the end of September. Between tributary and shoreline spawners, run and spawning time distributions were significantly different. The number of shoreline spawners was relatively stable and independent of annual escapement levels during the study, which suggests that the shoreline spawning component is distinct and not surplus production from an undifferentiated run. Since Tustumena Lake has been fully deglaciated for only about 2,000 years and is still significantly influenced by glacier meltwater, this diversification of spawning populations is probably a relatively recent and ongoing event.
Extraordinary flood response of a small urban watershed to short-duration convective rainfall
Smith, J.A.; Miller, A.J.; Baeck, M.L.; Nelson, P.A.; Fisher, G.T.; Meierdiercks, K.L.
2005-01-01
The 9.1 km² Moores Run watershed in Baltimore, Maryland, experiences floods with unit discharge peaks exceeding 1 m³ s⁻¹ km⁻² 12 times yr⁻¹, on average. Few, if any, drainage basins in the continental United States have a higher frequency. A thunderstorm system on 13 June 2003 produced the record flood peak (13.2 m³ s⁻¹ km⁻²) during the 6-yr stream gauging record of Moores Run. In this paper, the hydrometeorology, hydrology, and hydraulics of extreme floods in Moores Run are examined through analyses of the 13 June 2003 storm and flood, as well as other major storm and flood events during the 2000-03 time period. The 13 June 2003 flood, like most floods in Moores Run, was produced by an organized system of thunderstorms. Analyses of the 13 June 2003 storm, which are based on volume scan reflectivity observations from the Sterling, Virginia, WSR-88D radar, are used to characterize the spatial and temporal variability of flash flood producing rainfall. Hydrology of flood response in Moores Run is characterized by highly efficient concentration of runoff through the storm drain network and relatively low runoff ratios. A detailed survey of high-water marks for the 13 June 2003 flood is used, in combination with analyses based on a 2D, depth-averaged open channel flow model (TELEMAC 2D), to examine hydraulics of the 13 June 2003 flood. Hydraulic analyses are used to examine peak discharge estimates for the 13 June flood peak, propagation of flood waves in the Moores Run channel, and 2D flow features associated with channel and floodplain geometry. © 2005 American Meteorological Society.
GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed
NASA Astrophysics Data System (ADS)
Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.
2008-12-01
GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for less cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE floating-point compliance with performance up to several hundred GFLOPS per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speedups of nearly a factor of 20 over equivalent CPU versions of the codes. Such a speedup enables new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
MBE growth of vertical-cavity surface-emitting laser structure without real-time monitoring
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Tsou, Y.; Tsai, C. M.
1999-05-01
Evaluation of producing a vertical-cavity surface-emitting laser (VCSEL) epitaxial structure by molecular beam epitaxy (MBE) without resorting to any real-time monitoring technique is reported. Continuous grading of AlxGa1-xAs from x=0.12 to x=0.92 was achieved simply by changing the Al and Ga cell temperatures in no more than three steps per DBR period. Highly uniform DBR and VCSEL structures were demonstrated with a multi-wafer MBE system. The run-to-run standard deviation was 0.5% for the reflectance spectrum center wavelength and 1.4% for the VCSEL etalon wavelength.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories run simultaneously. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization yields timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, the vectorized FPS code is up to a factor of 25 faster than the vectorized VAX code. Copyright © 1986 John Wiley & Sons, Inc.
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
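The shortened run period can be illustrated with a toy calculation: sample one week of hourly data per quarter and scale the sampled result back up to a full year. The specific weeks chosen below (the first full week of each quarter) are an assumption for illustration; the paper does not prescribe which weeks to use.

```python
HOURS_PER_YEAR = 8760
HOURS_PER_WEEK = 168

def quarterly_weeks(hourly):
    """Pick one week of hourly values per quarter (4 x 168 = 672 hours)."""
    assert len(hourly) == HOURS_PER_YEAR
    sample = []
    for start_week in (0, 13, 26, 39):   # assumed representative weeks
        start = start_week * HOURS_PER_WEEK
        sample.extend(hourly[start:start + HOURS_PER_WEEK])
    return sample

def scaled_annual_estimate(hourly):
    """Scale the 4-week sample sum back up to an annual total."""
    sample = quarterly_weeks(hourly)
    return sum(sample) * (HOURS_PER_YEAR / len(sample))
```

Simulating 672 hours instead of 8760 is a 92% reduction in simulated hours, which is consistent in spirit with the roughly 75% wall-clock savings reported once fixed costs (warm-up, sizing) are accounted for.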
Using Antelope and Seiscomp in the framework of the Romanian Seismic Network
NASA Astrophysics Data System (ADS)
Marius Craiu, George; Craiu, Andreea; Marmureanu, Alexandru; Neagoe, Cristian
2014-05-01
The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network currently consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Mark L4C, Ranger, GS21, Mark 22) and acceleration sensors (Kinemetrics EpiSensor). The primary goal of the real-time seismic network is to provide earthquake parameters from many broadband stations with a high dynamic range, for more rapid and accurate computation of earthquake locations and magnitudes. Data are acquired and exchanged using the Seedlink and Antelope™ program packages; the completely automated Antelope seismological system is run at the Data Center in Măgurele. The Antelope data acquisition and processing software is used for both real-time processing and post-processing, providing automatic event detection, arrival picking, event location, and magnitude calculation. It also provides graphical displays and automatic locations in near real time after a local, regional or teleseismic event has occurred. SeisComP 3 is another automated system run at NIEP, which provides the following features: data acquisition, data quality control, real-time data exchange and processing, network status monitoring, issuing event alerts, waveform archiving and data distribution, automatic event detection and location, and easy access to relevant information about stations, waveforms, and recent earthquakes. The main goal of this paper is to compare these two data acquisition systems in order to improve their detection capabilities, location accuracy, and magnitude and depth determination, and to reduce the RMS and other location errors.
Phase-locked-loop-based delay-line-free picosecond electro-optic sampling system
NASA Astrophysics Data System (ADS)
Lin, Gong-Ru; Chang, Yung-Cheng
2003-04-01
A delay-line-free, high-speed electro-optic sampling (EOS) system is proposed by employing a delay-time-controlled ultrafast laser diode as the optical probe. Versatile optoelectronic delay-time controllers (ODTCs) based on modified voltage-controlled phase-locked-loop phase-shifting technologies are designed for the laser. The integration of the ODTC circuit and the pulsed laser diode has replaced the traditional optomechanical delay-line module used in the conventional EOS system. This design essentially prevents sampling distortion from misalignment of the probe beam, and overcomes the difficulty in sampling free-running high-speed transients. The maximum tuning range, error, scanning speed, tuning responsivity, and resolution of the ODTC are 3.9π (700°), <5% deviation, 25-2405 ns/s, 0.557 ps/mV, and ˜1 ps, respectively. Free-running wave forms from the analog, digital, and pulsed microwave signals are sampled and compared with those measured by the commercial apparatus.
Improving Resource Selection and Scheduling Using Predictions. Chapter 1
NASA Technical Reports Server (NTRS)
Smith, Warren
2003-01-01
The introduction of computational grids has resulted in several new problems in the area of scheduling that can be addressed using predictions. The first problem is selecting where to run an application on the many resources available in a grid. Our approach to help address this problem is to provide predictions of when an application would start to execute if submitted to specific scheduled computer systems. The second problem is gaining simultaneous access to multiple computer systems so that distributed applications can be executed. We help address this problem by investigating how to support advance reservations in local scheduling systems. Our approaches to both of these problems are based on predictions for the execution time of applications on space-shared parallel computers. As a side effect of this work, we also discuss how predictions of application run times can be used to improve scheduling performance.
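The start-time prediction idea can be sketched as a small queue simulation: given predicted remaining run times for running and queued jobs, advance a clock until enough nodes free up for the new submission. This is a deliberately simplified stand-in (strict FCFS order, no backfilling), not the chapter's actual prediction algorithm; all names and parameters are illustrative.

```python
import heapq

def predicted_start_time(running, queued, new_nodes, total_nodes):
    """Predict when a new job needing `new_nodes` nodes would start.

    `running`: list of (finish_time, nodes) for jobs executing now.
    `queued`: list of (predicted_runtime, nodes) for jobs ahead of the
    new submission, in FCFS order.
    """
    events = list(running)               # min-heap keyed on finish time
    heapq.heapify(events)
    free = total_nodes - sum(n for _, n in running)
    clock = 0.0

    def wait_for(nodes):
        nonlocal free, clock
        while free < nodes:              # pop completions until nodes free
            finish, done = heapq.heappop(events)
            clock = max(clock, finish)
            free += done

    for runtime, nodes in queued:        # start each queued job in order
        wait_for(nodes)
        free -= nodes
        heapq.heappush(events, (clock + runtime, nodes))
    wait_for(new_nodes)                  # finally wait for the new job
    return clock
```

The quality of such a prediction rests entirely on the per-job runtime estimates, which is why the chapter focuses on predicting application execution times first.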
NASA Technical Reports Server (NTRS)
Cissom, R. D.; Melton, T. L.; Schneider, M. P.; Lapenta, C. C.
1999-01-01
The objective of this paper is to give the future ISS scientist and/or engineer a sense of what ISS payload operations are expected to be. This paper uses a real-time operations scenario to convey this message. The real-time operations scenario begins at the initiation of payload operations and runs through post-run experiment analysis. In developing this scenario, it is assumed that the ISS payload operations flight and ground capabilities are fully available for use by the payload user community. Emphasis is placed on telescience operations, whose main objective is to enable researchers to utilize experiment hardware onboard the International Space Station as if it were located in their terrestrial laboratory. An overview of the Payload Operations Integration Center (POIC) systems and user ground system options is included to provide an understanding of the systems and interfaces users will utilize to perform payload operations. Detailed information regarding POIC capabilities can be found in the POIC Capabilities Document, SSP 50304.
Adaptive DIT-Based Fringe Tracking and Prediction at IOTA
NASA Technical Reports Server (NTRS)
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
2004-01-01
An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
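The core idea — slide a window across each interferogram and track the dominant fringe frequency via a DFT — can be sketched in a few lines. This naive Python version recomputes each window's DFT directly; the IOTA implementation is an optimized sliding DFT in ANSI C, and the window size and bin range below are illustrative assumptions.

```python
import cmath
import math

def sliding_dft_peak(signal, window, freq_bins):
    """Return the dominant DFT bin for each window position in `signal`."""
    peaks = []
    for start in range(len(signal) - window + 1):
        seg = signal[start:start + window]
        best_bin, best_mag = None, -1.0
        for k in freq_bins:
            # DFT coefficient of this window at bin k.
            coeff = sum(
                x * cmath.exp(-2j * math.pi * k * n / window)
                for n, x in enumerate(seg)
            )
            if abs(coeff) > best_mag:
                best_bin, best_mag = k, abs(coeff)
        peaks.append(best_bin)
    return peaks

# A pure fringe at bin 4 of a 32-sample window is recovered at every offset,
# since a phase shift changes the DFT coefficient's phase, not its magnitude.
scan = [math.cos(2 * math.pi * 4 * n / 32) for n in range(64)]
peaks = sliding_dft_peak(scan, 32, range(1, 16))
```

A true sliding DFT updates each bin incrementally as the window advances (one complex multiply-add per bin per sample), which is what makes the millisecond-scale real-time budget feasible.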
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models: namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topographic Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model brought about a reduction in run time, a large time savings was not observed, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of reduced model run time.
Run-and-tumble-like motion of active colloids in viscoelastic media
NASA Astrophysics Data System (ADS)
Lozano, Celia; Ruben Gomez-Solano, Juan; Bechinger, Clemens
2018-01-01
Run-and-tumble motion is a prominent locomotion strategy employed by many living microorganisms. It is characterized by straight swimming intervals (runs), which are interrupted by sudden reorientation events (tumbles). In contrast, directional changes of synthetic microswimmers (active particles) are caused by rotational diffusion, which is superimposed with their translational motion and thus leads to rather continuous and slow particle reorientations. Here we demonstrate that active particles can also perform a swimming motion where translational and orientational changes are disentangled, similar to run-and-tumble. In our system, such motion is realized by a viscoelastic solvent and a periodic modulation of the self-propulsion velocity. Experimentally, this is achieved using light-activated Janus colloids, which are illuminated by a time-dependent laser field. We observe a strong enhancement of the effective translational and rotational motion when the modulation time is comparable to the relaxation time of the viscoelastic fluid. Our findings are explained by the relaxation of the elastic stress, which builds up during the self-propulsion, and is suddenly released when the activity is turned off. In addition to a better understanding of active motion in viscoelastic surroundings, our results may suggest novel steering strategies for synthetic microswimmers in complex environments.
Asymmetry in Determinants of Running Speed During Curved Sprinting.
Ishimura, Kazuhiro; Sakurai, Shinji
2016-08-01
This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.
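The top level of such a deterministic model is simple arithmetic: speed is step length over step time, with each term decomposed into phase components. A minimal sketch (two phases only; the study's model decomposes further into takeoff speed and angle, directional velocities, and center-of-gravity height):

```python
def running_speed(stance_dist, flight_dist, stance_time, flight_time):
    """Average speed from a two-phase step decomposition (meters, seconds).

    step length = stance distance + flight distance
    step time   = stance time + flight time
    speed       = step length / step time
                  (equivalently, step length x step frequency)
    """
    step_length = stance_dist + flight_dist
    step_time = stance_time + flight_time
    return step_length / step_time
```

With this decomposition, the asymmetries the study reports (e.g., a shorter stance time and larger flight distance on one leg) propagate directly into per-leg speed, even when the averages across both legs are identical.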
Lacome, Mathieu; Piscione, Julien; Hager, Jean-Philippe; Carling, Christopher
2016-09-01
To investigate the patterns and performance of substitutions in 18 international 15-a-side men's rugby union matches. A semiautomatic computerized time-motion system compiled 750 performance observations for 375 players (422 observations of forwards, 328 of backs). Running and technical-performance measures included total distance run, high-intensity running (>18.0 km/h), number of individual ball possessions and passes, percentage of passes completed, and number of attempted and percentage of successful tackles. A total of 184 substitutions (85.2%) were attributed to tactical purposes and 32 (14.8%) to injury. Non-injury substitutions of backs peaked (17.7%) between 70 and 75 min, while forward substitutions peaked equally (16.4%) between 50-55 and 60-65 min. Substitutes generally demonstrated improved running performance compared with both starter players who completed games and the players whom they replaced (small differences, ES -0.2 to 0.5) in both forwards and backs over their entire time played. There was also a trend for better running performance in forward and back substitutes over their first 10 min of play compared with the final 10 min for replaced players (small to moderate differences, ES 0.3-0.6). Finally, running performance in both forward and back substitutes was generally lower (ES -0.1 to 0.3, unclear or small differences) over their entire 2nd-half time played compared with their first 10 min of play. The impact of substitutes on technical performance was generally considered unclear. This information provides practitioners with practical data relating to the physical and technical contributions of substitutions that could subsequently enable optimization of their impact on match play.
The Influence of Footwear on the Modular Organization of Running.
Santuz, Alessandro; Ekizos, Antonis; Janshen, Lars; Baltzopoulos, Vasilios; Arampatzis, Adamantios
2017-01-01
For most of our history, we predominantly ran barefoot or in minimalist shoes. The advent of modern footwear, however, might have introduced alterations in the motor control of running. The present study investigated shod and barefoot running from the perspective of the modular organization of muscle activation, in order to help address the neurophysiological factors underlying human locomotion. On a treadmill, 20 young and healthy inexperienced barefoot runners ran shod and barefoot at preferred speed (2.8 ± 0.4 m/s). Fundamental synergies, containing the time-dependent activation coefficients (motor primitives) and the time-invariant muscle weightings (motor modules), were extracted from 24 ipsilateral electromyographic activities using non-negative matrix factorization. In shod running, the average foot strike pattern was a rearfoot strike, while in barefoot running it was a mid-forefoot strike. In both conditions, five fundamental synergies were enough to describe as many gait cycle phases: weight acceptance, propulsion, arm swing, early swing and late swing. We found the motor primitives to be generally shifted earlier in time during the stance-related phases and later in the swing-related ones in barefoot running. The motor primitive describing the propulsion phase was of significantly shorter duration (a peculiarity confirmed by the analysis of the spinal motor output). The arm swing primitive, instead, was significantly wider in the barefoot condition. The motor modules demonstrated analogous organization, with some significant differences in the propulsion, arm swing and late swing synergies. Beyond the trivial absence of shoes, the differences might be attributed to the lower ankle gear ratio (and the consequent increased system instability) and to the higher recoil capabilities of the longitudinal foot arch during barefoot compared with shod running.
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
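The kind of check the authors rely on can be reproduced in any high-level environment: repeatedly time a target interval and inspect the worst-case overshoot. A minimal Python sketch follows (a busy-wait, which trades CPU for precision; the iteration count and 1 ms target are illustrative parameters, not the paper's protocol):

```python
import time

def worst_overshoot_ms(n_iters=50, target_ms=1.0):
    """Busy-wait for `target_ms` repeatedly; return the worst overshoot (ms).

    On a non-real-time OS, occasional preemption inflates this number,
    which is exactly the kind of temporal error the paper measures and
    takes steps to control.
    """
    worst = 0.0
    for _ in range(n_iters):
        start = time.perf_counter()
        deadline = start + target_ms / 1000.0
        while time.perf_counter() < deadline:
            pass                     # spin rather than sleep, for precision
        error = (time.perf_counter() - start) * 1000.0 - target_ms
        worst = max(worst, error)
    return worst

overshoot = worst_overshoot_ms()
```

Sleep-based waits (`time.sleep`) typically show much larger worst-case error than this spin loop, because the OS scheduler only guarantees a minimum, not an exact, sleep duration.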
Simulation Study of Evacuation Control Center Operations Analysis
2011-06-01
[Table-of-contents residue from the report; recoverable headings: 4.3 Baseline Manning (Runs 1, 2, & 3); 4.3.1 Baseline Statistics Interpretation; Appendix B, Key Statistic Matrix: Runs 1-12; Appendix C, Blue Dart; Paired T result, Run 5 vs. Run 6: ECC Completion Time; Key Statistics: Run 3 vs. Run 9]
Organisational Pattern Driven Recovery Mechanisms
NASA Astrophysics Data System (ADS)
Giacomo, Valentina Di; Presenza, Domenico; Riccucci, Carlo
The process of reacting to system failures and security attacks is strongly influenced by a system's infrastructural, procedural and organisational settings. Analysis of reaction procedures and practices from different domains (Air Traffic Management, Computer Security Incident Response, Emergency Response, recovery in the Chemical Process Industry) highlights three key requirements for this activity: smooth collaboration and coordination among responders, accurate monitoring and management of resources, and the ability to adapt pre-established reaction plans to the actual context. The SERENITY Reaction Mechanisms (SRM) is the subsystem of the SERENITY Run-time Framework aimed at providing SERENITY-aware AmI settings (i.e. socio-technical systems with highly distributed dynamic services) with functionalities to implement application-specific reaction strategies. The SRM uses SERENITY Organisational S&D Patterns as run-time models to drive these three key functionalities.
Da Costa, M J; Zaragoza-Santacruz, S; Frost, T J; Halley, J; Pesti, G M
2017-08-01
The objective of this experiment was to evaluate the effects of raising broilers under sex separate and straight-run conditions for 2 broiler strains. Day-old Ross 308 and Ross 708 chicks (n = 1,344) were separated by sex and placed in 48 pens according to the rearing type: sex separate (28 males or 28 females) or straight-run (14 males + 14 females). There were 3 dietary phases: starter (zero to 17 d), grower (17 to 32 d), and finisher (32 to 48 d). Birds' individual BW and feed intakes were measured at 12, 17, 25, 32, 42, and 48 d to evaluate performance. At 33, 43, and 49 d, 4 birds per pen were sampled for carcass yield evaluation. Additionally, from 06:00 to 06:30, 13:00 to 13:30, and 22:00 to 22:30, video records were taken to assess behavior at 45 days. Data were analyzed as CRD with a 2 × 3 factorial arrangement of treatments over time. Throughout the experiment Ross 308 were heavier than the 708, and after 17 d, male pens had the heavier birds, followed by straight-run and then females. Straight-run pens had higher BW CV in comparison with sex separate pens. Sex separate male BW was negatively impacted from 17 to 32 days. On the other hand, females raised sex separate were heavier than females raised straight-run with lower CV from 25 to 41 days. Post 25 d, FCR was the lowest in male pens whereas feed intake was the highest for these pens after 17 days. Overall, males had total carcass cut-up weights higher than straight-run and females at the 3 processing times. The Ross 708 had higher white meat yields, whereas 308 had higher yields for dark meat. Feeding behavior results were not consistent over time. However, from 13:00 to 13:30, birds in female pens spent more time eating, followed by straight-run and then males. In conclusion, raising females in a straight-run system negatively impacted performance and CV, whereas males benefited from straight-run rearing, with the differences being possibly related to feeder space competition. 
© 2017 Poultry Science Association Inc.
Ihsan, Mohammed; Tan, Frankie; Sahrom, Sofyan; Choo, Hui Cheng; Chia, Michael; Aziz, Abdul Rashid
2017-06-01
This study examined the associations between pre-game wellness and changes in match running performance normalised to either (i) playing time, (ii) post-match RPE or (iii) both playing time and post-match RPE, over the course of a field hockey tournament. Twelve male hockey players were equipped with global positioning system (GPS) units while competing in an international tournament (six matches over 9 days). The following GPS-derived variables, total distance (TD), low-intensity activity (LIA; <15 km/h), high-intensity running (HIR; >15 km/h), high-intensity accelerations (HIACC; >2 m/s²) and decelerations (HIDEC; >-2 m/s²) were acquired and normalised to either (i) playing time, (ii) post-match RPE or (iii) both playing time and post-match RPE. Each morning, players completed ratings on a 0-10 scale for four variables: fatigue, muscle soreness, mood state and sleep quality, with cumulative scores determined as wellness. Associations between match performances and wellness were analysed using Pearson's correlation coefficient. Combined time and RPE normalisation demonstrated the largest associations with Δwellness compared with time or RPE alone for most variables: TD (r = -0.95; -1.00 to -0.82, p = .004), HIR (r = -0.95; -1.00 to -0.83, p = .003), LIA (r = -0.94; -1.00 to -0.81, p = .026), HIACC (r = -0.87; -1.00 to -0.66, p = .004) and HIDEC (r = -0.90; -0.99 to -0.74, p = .008). These findings support the use of wellness measures as a pre-match tool to assist with managing internal load over the course of a field hockey tournament. Highlights: Fixtures during international field hockey tournaments are typically congested and impose high physiological demands on an athlete. To minimise decrements in running performance over the course of a tournament, measures to identify players who have sustained high internal loads are logically warranted.
The present study examined the association between changes in simple customised psychometric wellness measures and changes in match running performance normalised to (i) playing time, (ii) post-match RPE and (iii) both playing time and post-match RPE, over the course of a field hockey tournament. Changes in match running performance were better associated with changes in wellness (r = -0.87 to -0.95) when running performances were normalised to both time and RPE than when normalised to time or RPE alone. The present findings support the use of wellness measures as a pre-match tool to assist with managing internal load over the course of a field hockey tournament.
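The normalisation and correlation procedure described above can be sketched as follows. The data values, divisors, and the use of match-to-match changes are illustrative assumptions; only the general approach (normalising a GPS variable by playing time, RPE, or both, then correlating changes against changes in wellness) follows the abstract.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-match data for one player over six matches (illustrative only).
total_distance_m = np.array([8200.0, 7900.0, 7400.0, 7600.0, 7100.0, 6900.0])
minutes_played   = np.array([52.0, 50.0, 47.0, 49.0, 46.0, 45.0])
post_match_rpe   = np.array([6.0, 7.0, 7.0, 8.0, 8.0, 9.0])
wellness_score   = np.array([32.0, 30.0, 27.0, 26.0, 24.0, 21.0])  # sum of four 0-10 items

# The three normalisations used in the study: per minute, per RPE unit, per (minute x RPE).
td_per_min     = total_distance_m / minutes_played
td_per_rpe     = total_distance_m / post_match_rpe
td_per_min_rpe = total_distance_m / (minutes_played * post_match_rpe)

# Changes relative to the first match, correlated against changes in wellness.
d_wellness = wellness_score - wellness_score[0]
for label, series in [("per-min", td_per_min), ("per-RPE", td_per_rpe),
                      ("per-min-RPE", td_per_min_rpe)]:
    r, p = pearsonr(series - series[0], d_wellness)
    print(f"TD {label}: r = {r:+.2f} (p = {p:.3f})")
```

The same loop would be repeated for each GPS variable (HIR, LIA, HIACC, HIDEC) in a full analysis.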
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered through programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.
Performance of the LHCb RICH detectors during the LHC Run II
NASA Astrophysics Data System (ADS)
Papanestis, A.; D'Ambrosio, C.; LHCb RICH Collaboration
2017-12-01
The LHCb RICH system provides hadron identification over a wide momentum range (2-100 GeV/c). This detector system is key to LHCb's precision flavour physics programme, which has unique sensitivity to physics beyond the Standard Model. This paper reports on the performance of the LHCb RICH in Run II, following significant changes in the detector and operating conditions. The changes include the refurbishment of a significant number of photon detectors, assembled using new vacuum technologies, and the removal of the aerogel radiator. The start of Run II of the LHC saw the beam energy increase to 6.5 TeV per beam and a new trigger strategy for LHCb with full online detector calibration. The RICH information has also been made available for all trigger streams in the High Level Trigger for the first time.
Daily rainfall forecasting for one year in a single run using Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Unnikrishnan, Poornima; Jothiprakash, V.
2018-06-01
Effective modelling and prediction of rainfall at small time steps is reported to be very difficult owing to its highly erratic nature. Accurate forecasts of daily rainfall over longer durations (multiple time steps) can be exceptionally helpful in the efficient planning and management of water resources systems. Identification of inherent patterns in a rainfall time series is also important for effective water resources planning and management. In the present study, Singular Spectrum Analysis (SSA) is utilized to forecast the daily rainfall time series of the Koyna watershed in Maharashtra, India, for 365 days, after extracting components of the rainfall time series such as trend, periodic component, cyclic component and noise. In order to forecast the time series over a longer horizon (365 days, one window length), the signal and noise components of the time series are forecasted separately and then added together. The results of the study show that SSA could extract the various components of the time series effectively and could also forecast the daily rainfall time series for a duration as long as one year in a single run with reasonable accuracy.
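A minimal sketch of the SSA decomposition step the abstract describes: build the trajectory matrix, take its SVD, and reconstruct a "signal" series from the leading components, leaving the remainder as noise. The synthetic series, window length and component count here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ssa_components(x, window, n_components):
    """Reconstruct a series from the leading singular triplets of its trajectory matrix."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(n_components))
    # Diagonal averaging (Hankelisation) maps the matrix back to a series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            recon[i + j] += Xr[i, j]
            counts[i + j] += 1
    return recon / counts

# Synthetic "rainfall-like" daily series: level + seasonal cycle + noise (hypothetical).
t = np.arange(730)
x = 5 + 3 * np.sin(2 * np.pi * t / 365) + np.random.default_rng(0).normal(0, 1, 730)
signal = ssa_components(x, window=365, n_components=3)  # mean + seasonal pair
noise = x - signal
```

In the SSA forecasting workflow, `signal` and `noise` would then each be extrapolated (e.g., by linear recurrent forecasting) and summed to give the multi-step forecast.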
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. 
It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
Transfer function of analog fiber-optic systems driven by Fabry-Perot lasers: comment
NASA Astrophysics Data System (ADS)
Gyula, Veszely
2006-10-01
Poor notation makes the paper by Capmany et al. [J. Opt. Soc. Am. B 22, 2099 (2005)] difficult to understand: the real time function and the complex time function run into one another.
Periodic spring-mass running over uneven terrain through feedforward control of landing conditions.
Palmer, Luther R; Eaton, Caitrin E
2014-09-01
This work pursues a feedforward control algorithm for high-speed legged locomotion over uneven terrain. The ability to rapidly negotiate uneven terrain without visual or a priori information about the terrain will allow legged systems to be used in time-critical applications and alongside fast-moving humans or vehicles. The algorithm is implemented here on a spring-loaded inverted pendulum model in simulation, and can be configured either to hold a fixed running height over uneven terrain or to follow the terrain in a self-stable manner. An offline search identifies unique landing conditions that achieve a desired apex height with a constant stride period over varying ground levels. Because the time between the apex and touchdown events is directly related to ground height, the landing conditions can be computed in real time as continuous functions of this falling time. Enforcing a constant stride period reduces the need for inertial sensing of the apex event, which is nontrivial for physical systems, and allows for clocked feedforward control of the swing leg.
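The real-time computation described above, landing conditions as continuous functions of the apex-to-touchdown falling time, can be sketched as interpolation over an offline-generated table. The table values below are hypothetical placeholders; in the paper they come from the offline search on the spring-loaded inverted pendulum model.

```python
import numpy as np

# Hypothetical offline-search results: falling time (s) -> touchdown leg angle (rad).
# A longer fall means lower ground, so ground height never needs to be sensed directly.
fall_times       = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
touchdown_angles = np.array([1.10, 1.05, 1.00, 0.96, 0.93])

def landing_angle(t_fall):
    """Interpolate the feedforward touchdown angle for a measured falling time."""
    return float(np.interp(t_fall, fall_times, touchdown_angles))

# With a constant stride period, the controller simply clocks the swing leg from
# the (predicted) apex event and reads off the landing condition.
angle = landing_angle(0.32)  # ≈ 0.98 rad for this hypothetical table
```

The continuous function of falling time is what lets the controller react to an unseen drop or step without terrain sensing.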
NASA Technical Reports Server (NTRS)
Simpson, James J.; Harkins, Daniel N.
1993-01-01
Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archiving, browsing, ordering, and distribution of satellite data, based upon the X Window System, high-bandwidth networks, and digital image rendering techniques. SSABLE automatically constructs relational database queries against archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) it has a bitmapped display (monochrome or greater); 2) it is running the X Window System; and 3) it is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia are typically 20-25 s.
Support for User Interfaces for Distributed Systems
NASA Technical Reports Server (NTRS)
Eychaner, Glenn; Niessner, Albert
2005-01-01
An extensible Java(TM) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoring users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed on any operating system with a Java run-time environment, without recompilation or code changes.
Ethanol consumption in mice: relationships with circadian period and entrainment.
Trujillo, Jennifer L; Do, David T; Grahame, Nicholas J; Roberts, Amanda J; Gorman, Michael R
2011-03-01
A functional connection between the circadian timing system and alcohol consumption is suggested by multiple lines of converging evidence. Ethanol consumption perturbs physiological rhythms in hormone secretion, sleep, and body temperature; and conversely, genetic and environmental perturbations of the circadian system can alter alcohol intake. A fundamental property of the circadian pacemaker, the endogenous period of its cycle under free-running conditions, was previously shown to differ between selectively bred high- (HAP) and low- (LAP) alcohol preferring replicate 1 mice. To test whether there is a causal relationship between circadian period and ethanol intake, we induced experimental, rather than genetic, variations in free-running period. Male inbred C57Bl/6J mice and replicate 2 male and female HAP2 and LAP2 mice were entrained to light:dark cycles of 26 or 22 h or remained in a standard 24 h cycle. On discontinuation of the light:dark cycle, experimental animals exhibited longer and shorter free-running periods, respectively. Despite robust effects on circadian period and clear circadian rhythms in drinking, these manipulations failed to alter the daily ethanol intake of the inbred strain or selected lines. Likewise, driving the circadian system at long and short periods produced no change in alcohol intake. In contrast with replicate 1 HAP and LAP lines, there was no difference in free-running period between ethanol naïve HAP2 and LAP2 mice. HAP2 mice, however, were significantly more active than LAP2 mice as measured by general home-cage movement and wheel running, a motivated behavior implicating a selection effect on reward systems. Despite a marked circadian regulation of drinking behavior, the free-running and entrained period of the circadian clock does not determine daily ethanol intake. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh
2017-07-01
In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by a Cross-Entropy optimization method. A Linear Quadratic Regulator is designed for comparison with the fuzzy-PD controller in terms of control quality parameters. The controllers are ported to and run on an STM32F4 Discovery Kit under a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs reliably on the embedded system and achieves the desired performance in terms of fast response, good balance and stability.
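A minimal sketch of the fuzzy-PD idea: a PD law whose gains are blended between "small-error" and "large-error" settings via triangular membership functions. The membership breakpoints and gain sets below are hypothetical tuning values; the authors' relation-model formulation and Cross-Entropy-optimized scaling factors are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pd(error, d_error):
    """PD control with gains blended by fuzzy membership of the error magnitude."""
    # Hypothetical gain sets for the two fuzzy regions (tuning parameters).
    kp_small, kd_small = 2.0, 0.1
    kp_large, kd_large = 8.0, 0.4
    e = abs(error)
    mu_small = tri(e, -1.0, 0.0, 0.5)   # full membership near zero error
    mu_large = tri(e, 0.0, 0.5, 1.0e9)  # grows with error magnitude
    w = mu_small + mu_large
    kp = (mu_small * kp_small + mu_large * kp_large) / w
    kd = (mu_small * kd_small + mu_large * kd_large) / w
    return kp * error + kd * d_error
```

On the embedded target, `error` would be the tilt angle and `d_error` the gyro rate, with `fuzzy_pd` evaluated each control tick of the RTOS loop.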
Spiral Bevel Pinion Crack Detection in a Helicopter Gearbox
NASA Technical Reports Server (NTRS)
Decker, Harry J.; Lewicki, David G.
2003-01-01
The vibration resulting from a cracked spiral bevel pinion was recorded and analyzed using existing Health and Usage Monitoring System (HUMS) techniques. A tooth on the input pinion of a Bell OH-58 main rotor gearbox was notched and run for an extended period at a severe over-torque condition to facilitate a tooth fracture. Thirteen vibration-based diagnostic metrics were calculated throughout the run. After 101.41 hours of run time, some of the metrics indicated damage; at that point a visual inspection did not reveal any damage. The pinion was then run for another 12 minutes until a proximity probe indicated that a tooth had fractured. This paper discusses the damage detection effectiveness of the different metrics and compares the effects of the different accelerometer locations.
The Katydid system for compiling KEE applications to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.
New-style defect inspection system of film
NASA Astrophysics Data System (ADS)
Liang, Yan; Liu, Wenyao; Liu, Ming; Lee, Ronggang
2002-09-01
An inspection system has been developed for on-line detection of film defects, based on a combination of photoelectric imaging and digital image processing. The system runs at high speed, up to 60 m/min. The moving film is illuminated by an LED array emitting uniform infrared light (peak wavelength λp = 940 nm), and infrared images are captured with a high-quality, high-speed CCD camera. The application software, written in Visual C++ 6.0 under Windows, processes the images in real time using algorithms such as median filtering, edge detection and projection. The system is made up of four modules, which are introduced in detail in the paper. On-line experimental results show that the inspection system can recognize defects precisely at high speed and run reliably in practical applications.
Validating GPM-based Multi-satellite IMERG Products Over South Korea
NASA Astrophysics Data System (ADS)
Wang, J.; Petersen, W. A.; Wolff, D. B.; Ryu, G. H.
2017-12-01
Accurate precipitation estimates derived from space-borne satellite measurements are critical for a wide variety of applications, such as water budget studies and the prevention or mitigation of natural hazards caused by extreme precipitation events. This study validates the near-real-time Early Run and Late Run and the research-quality Final Run Integrated Multi-Satellite Retrievals for GPM (IMERG) using Korean Quantitative Precipitation Estimation (QPE). The Korean QPE data are at a 1-hour temporal resolution and 1-km by 1-km spatial resolution, and were developed by the Korea Meteorological Administration (KMA) from a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system utilizing eleven radars over the Republic of Korea. The validation is conducted by comparing Version-04A IMERG (Early, Late and Final Runs) with Korean QPE over the area (124.5E-130.5E, 32.5N-39N) at various spatial and temporal scales during March 2014 through November 2016. The comparisons demonstrate the reasonably good ability of the Version-04A IMERG products to estimate precipitation over South Korea's complex topography, which consists mainly of hills and mountains as well as large coastal plains. Based on these data, the Early Run, Late Run and Final Run IMERG precipitation estimates higher than 0.1 mm h-1 are about 20.1%, 7.5% and 6.1% higher than Korean QPE at 0.1° and 1-hour resolutions. Detailed comparison results are available at https://wallops-prf.gsfc.nasa.gov/KoreanQPE.V04/index.html
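The percentage figures above are relative-bias statistics. A sketch of how such a number can be computed from co-located satellite and radar QPE values is shown below, with hypothetical data and a simple rain/rain threshold of 0.1 mm/h; the project's exact matching and averaging rules may differ.

```python
import numpy as np

# Hypothetical co-located hourly rain rates on the common 0.1-degree grid (mm/h).
qpe   = np.array([0.0, 0.3, 1.2, 5.0, 0.05, 2.4])   # "truth": gauge-adjusted radar QPE
imerg = np.array([0.1, 0.4, 1.5, 5.5, 0.00, 2.9])   # satellite estimate

# Keep only pairs where both products report rain above the 0.1 mm/h threshold.
mask = (qpe > 0.1) & (imerg > 0.1)

# Relative bias (%) of the satellite estimate against the radar reference.
relative_bias_pct = 100.0 * (imerg[mask].mean() - qpe[mask].mean()) / qpe[mask].mean()
print(f"relative bias: {relative_bias_pct:+.1f}%")
```

A positive value, as here, corresponds to the satellite product overestimating relative to the ground reference, matching the sign of the 20.1%/7.5%/6.1% figures quoted above.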
Planning And Reasoning For A Telerobot
NASA Technical Reports Server (NTRS)
Peters, Stephen F.; Mittman, David S.; Collins, Carol E.; O'Meara Callahan, Jacquelyn S.; Rokey, Mark J.
1992-01-01
This document discusses research and development of the Telerobot Interactive Planning System (TIPS). The goal is to enable TIPS to accept instructions from an operator and then command a run-time controller to carry out the corresponding operations. Challenges in transferring the technology from testbed to operational system are also discussed.
NASA Technical Reports Server (NTRS)
Case, Jonathan; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.
2014-01-01
Flooding and drought are two key forecasting challenges for the Kenya Meteorological Department (KMD). Atmospheric processes leading to excessive precipitation and/or prolonged drought can be quite sensitive to the state of the land surface, which interacts with the boundary layer of the atmosphere providing a source of heat and moisture. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface within weakly-sheared environments, such as in the tropics and sub-tropics. These heat and moisture fluxes during the day can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in numerical weather prediction models. Enhanced regional modeling capabilities have the potential to improve forecast guidance in support of daily operations and high-end events over east Africa. KMD currently runs a configuration of the Weather Research and Forecasting (WRF) model in real time to support its daily forecasting operations, invoking the Nonhydrostatic Mesoscale Model (NMM) dynamical core. They make use of the National Oceanic and Atmospheric Administration / National Weather Service Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the WRF-NMM model runs on a 7-km regional grid over eastern Africa. Two organizations at the National Aeronautics and Space Administration Marshall Space Flight Center in Huntsville, AL, SERVIR and the Short-term Prediction Research and Transition (SPoRT) Center, have established a working partnership with KMD for enhancing its regional modeling capabilities. To accomplish this goal, SPoRT and SERVIR will provide experimental land surface initialization datasets and model verification capabilities to KMD. 
To produce a land-surface initialization more consistent with the resolution of the KMD-WRF runs, the NASA Land Information System (LIS) will be run at a comparable resolution to provide real-time, daily soil initialization data in place of interpolated Global Forecast System soil moisture and temperature data. Additionally, real-time green vegetation fraction data from the Visible Infrared Imaging Radiometer Suite will be incorporated into the KMD-WRF runs, once it becomes publicly available from the National Environmental Satellite Data and Information Service. Finally, model verification capabilities will be transitioned to KMD using the Model Evaluation Tools (MET) package, in order to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. The transition of these MET tools will enable KMD to monitor model forecast accuracy in near real time. This presentation will highlight preliminary verification results of WRF runs over east Africa using the LIS land surface initialization.
Energy system contribution to 400-metre and 800-metre track running.
Duffield, Rob; Dawson, Brian; Goodman, Carmel
2005-03-01
As a wide range of values has been reported for the relative energetics of the 400-m and 800-m track running events, this study aimed to quantify the respective aerobic and anaerobic energy contributions to these events during track running. Sixteen trained 400-m (11 males, 5 females) and 11 trained 800-m (9 males, 2 females) athletes participated in this study. The participants performed (on separate days) a laboratory graded exercise test and multiple race time-trials. The relative energy system contribution was calculated by multiple methods based upon measures of race VO2, accumulated oxygen deficit (AOD), blood lactate and estimated phosphocreatine degradation (lactate/PCr). The aerobic/anaerobic energy system contribution (AOD method) to the 400-m event was calculated as 41/59% (male) and 45/55% (female). For the 800-m event, an increased aerobic involvement was noted, with respective contributions of 60/40% (male) and 70/30% (female). Significant (P < 0.05) negative correlations were noted between race performance and anaerobic energy system involvement (lactate/PCr) for the male 800-m and female 400-m events (r = -0.77 and -0.87 respectively). These track running data compare well with previous estimates of the relative energy system contributions to the 400-m and 800-m events. Additionally, the relative importance and speed of interaction of the respective metabolic pathways have implications for training for these events.
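Under the AOD method, the aerobic/anaerobic split reduces to simple proportions of oxygen equivalents: the oxygen actually consumed during the race versus the accumulated oxygen deficit. The race values below are hypothetical, chosen only to land near the reported male 400-m split.

```python
def energy_contributions(o2_consumed_l, accumulated_o2_deficit_l):
    """Relative aerobic/anaerobic contribution (%) under the AOD method."""
    total = o2_consumed_l + accumulated_o2_deficit_l
    aerobic = 100.0 * o2_consumed_l / total
    return aerobic, 100.0 - aerobic

# Hypothetical 400-m race values, in litres of O2 equivalents: measured race VO2
# versus the deficit relative to the demand projected from the graded exercise test.
aer, anaer = energy_contributions(2.1, 3.0)
print(f"aerobic {aer:.0f}% / anaerobic {anaer:.0f}%")  # ≈ the 41/59 male 400-m split
```

The longer 800-m event tips the same calculation toward the aerobic side simply because more total oxygen is consumed while the achievable deficit stays bounded.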
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
The ATLAS Level-1 Topological Trigger performance in Run 2
NASA Astrophysics Data System (ADS)
Riu, Imma; ATLAS Collaboration
2017-10-01
The Level-1 trigger is the first event-rate-reducing step in the ATLAS detector trigger system, with an output rate of up to 100 kHz and a decision latency smaller than 2.5 μs. During the LHC shutdown after Run 1, the Level-1 trigger system was upgraded at the hardware, firmware and software levels. In particular, a new electronics sub-system was introduced in the real-time data processing path: the Level-1 Topological trigger system. It consists of a single electronics shelf equipped with two Level-1 Topological processor blades. They receive real-time information from the Level-1 calorimeter and muon triggers, which is processed to measure angles between trigger objects, invariant masses and other kinematic variables. Complementary to other requirements, these measurements are taken into account in the final Level-1 trigger decision. The system was installed, and commissioning started in 2015 and continued during 2016. As part of the commissioning, the decisions from individual algorithms were simulated and compared with the hardware response. An overview of the Level-1 Topological trigger system design and commissioning process, and its impact on several event selections, is presented.
Determination of production run time and warranty length under system maintenance and trade credits
NASA Astrophysics Data System (ADS)
Tsao, Yu-Chung
2012-12-01
Manufacturers offer a warranty period within which they will fix failed products at no cost to customers. Manufacturers also perform system maintenance when a system is in an out-of-control state. Suppliers provide a credit period to settle the payment to manufacturers. This study considers manufacturer's production and warranty decisions for an imperfect production system under system maintenance and trade credit. Specifically, this study uses the economic production quantity to model the decisions under system maintenance and trade credit. These decisions involve how long the production run time and warranty length should be to maximise total profit. This study provides lemmas for the conditions of optimality and develops a theorem and an algorithm for solving the problems described. Numerical examples illustrate the solution procedures and provide a variety of managerial implications. Results show that simultaneously determining production and warranty decisions is superior to only determining production. This study also discusses the effects of the related parameters on manufacturer's decisions and profits. The results of this study are a useful reference for managerial decision-making and administration.
A distributed control system for the lower-hybrid current drive system on the Tokamak de Varennes
NASA Astrophysics Data System (ADS)
Bagdoo, J.; Guay, J. M.; Chaudron, G.-A.; Decoste, R.; Demers, Y.; Hubbard, A.
1990-08-01
An rf current drive system with an output power of 1 MW at 3.7 GHz is under development for the Tokamak de Varennes. The control system is based on an Ethernet local-area network of programmable logic controllers as the front end, personal computers as consoles, and CAMAC-based DSP processors. The DSP processors ensure the PID control of the phase and rf power of each klystron, and the fast protection of high-power rf hardware, all within a 40 μs loop. Slower control and protection, event sequencing, and the run-time database are provided by the programmable logic controllers, which communicate via the LAN with the consoles. The latter run commercial process-control console software. The LAN protocol respects the first four layers of the ISO/OSI 802.3 standard. Synchronization with the tokamak control system is provided by commercially available CAMAC timing modules which trigger shot-related events and reference waveform generators. A detailed description of each subsystem and a performance evaluation of the system are presented.
Transient dynamics capability at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.
1993-01-01
A brief overview of the transient dynamics capabilities at Sandia National Laboratories is presented, with an emphasis on recent developments and current research. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS is running on several different systems at SNL, including Cray UNICOS, Hewlett-Packard HP-UX, Digital Equipment ULTRIX, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, is presented. Abstracts and references for the codes are listed at the end of the report.
Brahms Mobile Agents: Architecture and Field Tests
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2002-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.
Does a run/walk strategy decrease cardiac stress during a marathon in non-elite runners?
Hottenrott, Kuno; Ludyga, Sebastian; Schulze, Stephan; Gronwald, Thomas; Jäger, Frank-Stephan
2016-01-01
Although alternating run/walk periods are often recommended to novice runners, it is unclear whether this pacing strategy reduces cardiovascular stress during prolonged exercise. Therefore, the aim of the study was to compare the effects of two different running strategies on selected cardiac biomarkers as well as marathon performance. Randomized experimental trial in a repeated-measures design. Male (n=22) and female (n=20) subjects completed a marathon either with a run/walk strategy or by running only. Immediately after crossing the finish line, cardiac biomarkers were assessed in blood taken from the cubital vein. Before (-7 days) and after the marathon (+4 days), subjects also completed an incremental treadmill test. Despite different pacing strategies, the run/walk and running-only groups finished the marathon with similar times (04:14:25±00:19:51 vs 04:07:40±00:27:15 [hh:mm:ss]; p=0.377). In both groups, prolonged exercise led to increased B-type natriuretic peptide, creatine kinase MB isoenzyme and myoglobin levels (p<0.001), which returned to baseline 4 days after the marathon. Elevated cTnI concentrations were observed in only two subjects. B-type natriuretic peptide (r=-0.363; p=0.041) and myoglobin levels (r=-0.456; p=0.009) were inversely correlated with the velocity at the individual anaerobic threshold. The run/walk group reported less muscle pain and fatigue (p=0.006) after the running event than the running-only group. In conclusion, the increase in cardiac biomarkers is a reversible, physiological response to strenuous exercise, indicating temporary stress on the myocyte and skeletal muscle. Although a combined run/walk strategy does not reduce the load on the cardiovascular system, it allows non-elite runners to achieve similar finish times with less muscle discomfort. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Alcator C-Mod Digital Plasma Control System
NASA Astrophysics Data System (ADS)
Wolfe, S. M.
2005-10-01
A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod ``Hybrid'' MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μs per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
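The synchronous loop described above - I/O and computation every cycle, with timing held by the loop itself rather than by interrupts - can be sketched as a busy-waiting cycle against a monotonic clock. The 1 kHz toy rate and the acquire/compute/output stubs below are illustrative assumptions; the real DPCS runs compiled IDL at 10 kHz with interrupts disabled:

```python
import time

def run_control_loop(cycle_hz, n_cycles, acquire, compute, output):
    """Fixed-rate synchronous loop: do I/O + computation, then busy-wait
    to the next deadline. Returns how many full cycles were overrun."""
    period = 1.0 / cycle_hz
    deadline = time.monotonic()
    overruns = 0
    for _ in range(n_cycles):
        deadline += period
        output(compute(acquire()))          # I/O and computation each cycle
        while time.monotonic() < deadline:  # busy-wait: no sleep, no interrupts
            pass
        if time.monotonic() > deadline + period:
            overruns += 1                   # missed an entire cycle
    return overruns

# Toy usage at a rate a stock OS can usually sustain.
misses = run_control_loop(
    cycle_hz=1000, n_cycles=100,
    acquire=lambda: 0.0,
    compute=lambda x: -x,   # stand-in for the MIMO gain computation
    output=lambda u: None)
print("overruns:", misses)
```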
[Operation room management in quality control certification of a mainstream hospital].
Leidinger, W; Meierhofer, J N; Schüpfer, G
2006-11-01
We report the results of our study concerning the organisation of operating room (OR) capacity planned 1 year in advance. The use of the ORs is controlled using two global controlling figures: a) the difference between the actual and the previously calculated optimal OR running time and b) the punctuality of the start of the first operation in each OR. The focal point of the presented OR management concept is a consensus-oriented decision-making and steering process led by a coordinator, who achieves a high degree of acceptance by means of comprehensive transparency. Based on the accepted running time, the optimal productivity of the ORs (OP_A(%)) can be calculated. In this way, an increase in the overall capacity (actual running time) of the ORs from 40% to over 55% was achieved. Nevertheless, enthusiasm and teamwork from all persons involved in the system, as well as a completely independent operating theatre manager, are vital for success. Using this concept, over 90% of the requirements of the new certification catalogue for hospitals in Germany were met.
Voluntary Wheel Running Induces Exercise-Seeking Behavior in Male Rats: A Behavioral Study.
Naghshvarian, Mojtaba; Zarrindast, Mohammad-Reza; Sajjadi, Seyedeh Fatemeh
2017-12-01
Research evidence shows that exercise is associated with positive physical and mental health. Moreover, exercise and wheel running in rats activate overlapping neural systems, including the reward system. The most commonly used models for studying the rewarding and aversive effects of exercise involve treadmill and wheel-running paradigms in mice or rats. The purpose of our experiment was to study the influence of continuous voluntary exercise on exercise-seeking behavior. In this experimental study, we used 24 adult male Sprague-Dawley rats weighing 275-300 g on average. Rats were divided into 3 experimental groups for 4 weeks of voluntary wheel running, with each rat given 24-hour access to a cage equipped with a running wheel. A within-subject repeated-measures design was employed to evaluate the trend of running and running rates. We found that exercise tendency increased both over time and with higher levels of exercise. Our results also show that the interaction of exercise over the 4 weeks with the different levels of exercise significantly promoted the rats' exercise-seeking behavior (F = 5.440; df = 2.08; P < 0.001). Our data suggest that voluntary wheel running can increase the likelihood of extreme and obsessive exercising, which is a form of non-drug addiction. 2017 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Leisure-time running reduces all-cause and cardiovascular mortality risk.
Lee, Duck-Chul; Pate, Russell R; Lavie, Carl J; Sui, Xuemei; Church, Timothy S; Blair, Steven N
2014-08-05
Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, 18 to 100 years of age (mean age 44 years). Running was assessed on a medical history questionnaire by leisure-time activity. During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with nonrunners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with nonrunners. Weekly running even <51 min, <6 miles, 1 to 2 times, <506 metabolic equivalent-minutes, or <6 miles/h was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Running, even 5 to 10 min/day and at slow speeds <6 miles/h, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Leisure-Time Running Reduces All-Cause and Cardiovascular Mortality Risk
Lee, Duck-chul; Pate, Russell R.; Lavie, Carl J.; Sui, Xuemei; Church, Timothy S.; Blair, Steven N.
2014-01-01
Background: Although running is a popular leisure-time physical activity, little is known about the long-term effects of running on mortality. The dose-response relations between running, as well as the change in running behaviors over time, and mortality remain uncertain. Objectives: We examined the associations of running with all-cause and cardiovascular mortality risks in 55,137 adults, aged 18 to 100 years (mean age, 44). Methods: Running was assessed on the medical history questionnaire by leisure-time activity. Results: During a mean follow-up of 15 years, 3,413 all-cause and 1,217 cardiovascular deaths occurred. Approximately 24% of adults participated in running in this population. Compared with non-runners, runners had 30% and 45% lower adjusted risks of all-cause and cardiovascular mortality, respectively, with a 3-year life expectancy benefit. In dose-response analyses, the mortality benefits in runners were similar across quintiles of running time, distance, frequency, amount, and speed, compared with non-runners. Weekly running even <51 minutes, <6 miles, 1-2 times, <506 metabolic equivalent-minutes, or <6 mph was sufficient to reduce risk of mortality, compared with not running. In the analyses of change in running behaviors and mortality, persistent runners had the most significant benefits, with 29% and 50% lower risks of all-cause and cardiovascular mortality, respectively, compared with never-runners. Conclusions: Running, even 5-10 minutes per day and at slow speeds <6 mph, is associated with markedly reduced risks of death from all causes and cardiovascular disease. This study may motivate healthy but sedentary individuals to begin and continue running for substantial and attainable mortality benefits. PMID:25082581
On the Run-Time Optimization of the Boolean Logic of a Program.
ERIC Educational Resources Information Center
Cadolino, C.; Guazzo, M.
1982-01-01
Considers problem of optimal scheduling of Boolean expression (each Boolean variable represents binary outcome of program module) on single-processor system. Optimization discussed consists of finding operand arrangement that minimizes average execution costs representing consumption of resources (elapsed time, main memory, number of…
A Comparison of Three Commercial Online Vendors.
ERIC Educational Resources Information Center
Hoover, Ryan E.
1979-01-01
Compares database update currency, number of hits, elapsed time, number of offline prints or online types, offline print turnaround time, vendor rates, total search cost, and discounted search cost based on vendor discount rates for five simple searches run on three major commercial vendors' online systems. (CWM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high-dose-gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors.
The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
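The layered, independently executed checks described in this abstract amount to a short-circuiting pipeline: each acquired frame passes through a fixed sequence of tests, and the first failure names the likely error source. The sketch below is a hypothetical rendering of that idea; the check names, tolerances, and frame fields are invented for illustration and are not the published metric:

```python
def make_check(name, predicate):
    """Pair a human-readable error-source label with a pass/fail predicate."""
    return (name, predicate)

def evaluate_frame(frame, checks):
    """Return None if all checks pass, else the name of the first failing check."""
    for name, passes in checks:
        if not passes(frame):
            return name
    return None

# Illustrative checks loosely echoing the sequence in the abstract.
checks = [
    make_check("out-of-field radiation", lambda f: f["outfield_dose"] < 0.01),
    make_check("output normalization",   lambda f: abs(f["output_ratio"] - 1.0) < 0.03),
    make_check("image alignment",        lambda f: f["shift_mm"] < 1.0),
    make_check("gamma 3%/3mm",           lambda f: f["gamma_pass_rate"] > 0.95),
]

good = {"outfield_dose": 0.0, "output_ratio": 1.01,
        "shift_mm": 0.2, "gamma_pass_rate": 0.99}
bad = dict(good, shift_mm=2.5)   # introduce a gross misalignment

print(evaluate_frame(good, checks))  # None
print(evaluate_frame(bad, checks))   # image alignment
```

Because the checks run in a fixed order and each tests one failure mode, the identity of the first failing check doubles as a coarse diagnosis of the error source.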
Selecting and implementing the PBS scheduler on an SGI Onyx 2/Origin 2000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bittner, S.
1999-06-28
In the Mathematics and Computer Science Division at Argonne, the demand for resources on the Onyx 2 exceeds the resources available for consumption. To distribute these scarce resources effectively, we need a scheduling and resource management package with multiple capabilities. In particular, it must accept standard interactive user logins, allow batch jobs, backfill the system based on available resources, and permit system activities such as accounting to proceed without interruption. The package must include a mechanism to treat the graphics pipes as a schedulable resource. Also required is the ability to create advance reservations, offer dedicated system modes for large resource runs and benchmarking, and track the resources consumed by each job run. Furthermore, our users want to be able to obtain repeatable timing results on job runs. And, of course, package costs must be carefully considered. We explored several options, including NQE and various third-party products, before settling on the PBS scheduler.
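Backfill, one of the capabilities listed above, can be illustrated with a toy scheduler: queued jobs run first-come-first-served, but a short job may jump ahead when it fits on the currently idle nodes without delaying the job at the head of the queue. The function below is a minimal sketch of that rule under invented job shapes; PBS's actual algorithm adds reservations, priorities, and accounting:

```python
def backfill_candidates(total_nodes, running, queue, now=0.0):
    """running: [(name, nodes, end_time)]; queue: [(name, nodes, est_runtime)].
    Return names of queued jobs that can start now without delaying queue[0]."""
    free = total_nodes - sum(n for _, n, _ in running)
    _, head_nodes, _ = queue[0]
    # Shadow time: when enough nodes will have freed up for the head job.
    avail, shadow = free, now
    for _, n, end in sorted(running, key=lambda r: r[2]):
        if avail >= head_nodes:
            break
        avail += n
        shadow = end
    picked, spare = [], free
    for name, nodes, runtime in queue[1:]:
        # Fits on idle nodes and finishes before the head job would start.
        if nodes <= spare and now + runtime <= shadow:
            picked.append(name)
            spare -= nodes
    return picked

running = [("sim", 6, 10.0)]    # 6 of 8 nodes busy until t=10
queue = [("big", 8, 5.0),       # head of queue needs the whole machine
         ("tiny", 2, 4.0),      # fits now: 2 idle nodes, done by t=4
         ("slow", 2, 20.0)]     # fits now but would delay "big"
print(backfill_candidates(8, running, queue))  # ['tiny']
```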
Oxygen production on Mars and the Moon
NASA Technical Reports Server (NTRS)
Sridhar, K. R.; Vaniman, B.; Miller, S.
1992-01-01
Significant progress was made in the area of in-situ oxygen production in the last year. In order to reduce sealing problems due to thermal expansion mismatch in the disk configuration, several all-Zirconia cells were constructed and are being tested. Two of these cells were run successfully for extended periods of time. One was run for over 200 hours and the other for over 800 hours. These extended runs, along with gas sample analysis, showed that the oxygen being produced is definitely from CO2 and not from air leaks or from the disk material. A new tube system is being constructed that is more rugged, portable, durable, and energy efficient. The important operating parameters of this system will be better controlled compared to previous systems. An electrochemical compressor will also be constructed with a similar configuration. The electrochemical compressor will use less energy since the feed stock is already heated in the separation unit. In addition, it does not have moving parts.
Steering cell migration by alternating blebs and actin-rich protrusions.
Diz-Muñoz, Alba; Romanczuk, Pawel; Yu, Weimiao; Bergert, Martin; Ivanovitch, Kenzo; Salbreux, Guillaume; Heisenberg, Carl-Philipp; Paluch, Ewa K
2016-09-02
High directional persistence is often assumed to enhance the efficiency of chemotactic migration. Yet, cells in vivo usually display meandering trajectories with relatively low directional persistence, and the control and function of directional persistence during cell migration in three-dimensional environments are poorly understood. Here, we use mesendoderm progenitors migrating during zebrafish gastrulation as a model system to investigate the control of directional persistence during migration in vivo. We show that progenitor cells alternate persistent run phases with tumble phases that result in cell reorientation. Runs are characterized by the formation of directed actin-rich protrusions and tumbles by enhanced blebbing. Increasing the proportion of actin-rich protrusions or blebs leads to longer or shorter run phases, respectively. Importantly, both reducing and increasing run phases result in larger spatial dispersion of the cells, indicative of reduced migration precision. A physical model quantitatively recapitulating the migratory behavior of mesendoderm progenitors indicates that the ratio of tumbling to run times, and thus the specific degree of directional persistence of migration, are critical for optimizing migration precision. Together, our experiments and model provide mechanistic insight into the control of migration directionality for cells moving in three-dimensional environments that combine different protrusion types, whereby the proportion of blebs to actin-rich protrusions determines the directional persistence and precision of movement by regulating the ratio of tumbling to run times.
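The run-and-tumble alternation described above can be caricatured with a toy random walker: straight run phases separated by tumbles that randomize heading. With illustrative parameters (not fitted to the zebrafish data), the sketch shows the expected qualitative effect that longer run phases give higher directional persistence and larger net displacement:

```python
import math, random

def final_positions(run_steps, tumble_every, n_cells, seed=0):
    """Simulate n_cells walkers that tumble (reorient) every `tumble_every` steps."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_cells):
        x = y = 0.0
        heading = 0.0                           # all cells start aimed along +x
        for t in range(run_steps):
            if t and t % tumble_every == 0:
                heading += rng.gauss(0.0, 1.0)  # tumble: random reorientation
            x += math.cos(heading)              # run: unit step along heading
            y += math.sin(heading)
        out.append((x, y))
    return out

def mean_displacement(points):
    return sum(math.hypot(x, y) for x, y in points) / len(points)

persistent = final_positions(200, tumble_every=100, n_cells=200)  # long runs
meandering = final_positions(200, tumble_every=10, n_cells=200)   # frequent tumbles
print(mean_displacement(persistent) > mean_displacement(meandering))  # True
```

Capturing the paper's subtler finding, that migration *precision* is optimized at an intermediate tumbling-to-running ratio, would require adding a target direction and noisy steering, which the authors' physical model does.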
Long-run operation of a reverse electrodialysis system fed with wastewaters.
Luque Di Salvo, Javier; Cosenza, Alessandro; Tamburini, Alessandro; Micale, Giorgio; Cipollina, Andrea
2018-07-01
The performance of a Reverse ElectroDialysis (RED) system fed by unconventional wastewater solutions over long operational periods is analysed for the first time. The experimental campaign was divided into a series of five independent long runs, each combining real wastewater solutions with artificial solutions for at least 10 days. The time evolution of the electrical variables, gross power output and net power output, also considering pumping losses, was monitored: power density values obtained during the long runs are comparable to those found in the literature with artificial feed solutions of similar salinity. The increase in pressure drops and the development of membrane fouling were the main factors degrading system performance. The increase in pressure drops was related to the physical obstruction of the feed channels defined by the spacers, while membrane fouling was related to the adsorption of foulants onto the membrane surfaces. To manage partial channel clogging and fouling, different kinds of easily implemented in-situ backwashings (i.e. neutral, acid, alkaline) were adopted, without the need for an abrupt interruption of RED unit operation. The application of periodic ElectroDialysis (ED) pulses was also tested as a fouling prevention strategy. The results collected suggest that RED can be used to produce electric power from low-value wastewaters, but additional studies are still needed to better characterize membrane fouling and further improve system performance with these solutions. Copyright © 2018 Elsevier Ltd. All rights reserved.
VERSE - Virtual Equivalent Real-time Simulation
NASA Technical Reports Server (NTRS)
Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel
2005-01-01
Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher-fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.
1980-10-09
the board are a buffer for the RUN/HALT front panel switch and a retriggerable oneshot multivibrator. This latter circuit senses the SRUN pulse train...recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital...the computer is fed to a retriggerable oneshot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor,' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g., Sun SPARC and SGI) supporting X Windows (in particular, X11R5 with Motif 1.1.3).
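The instrument-then-trace workflow that AIMS automates for FORTRAN and C can be miniaturized in Python for illustration: a decorator plays the role of the inserted event recorders, appending timestamped entry/exit events to an in-memory trace that a post-processor then reduces. The names and trace structure here are assumptions for the sketch, not the AIMS trace format:

```python
import functools, time

TRACE = []  # (event, function name, monotonic timestamp)

def instrument(fn):
    """Record an 'enter' and 'exit' event around every call to fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(("enter", fn.__name__, time.monotonic()))
        try:
            return fn(*args, **kwargs)
        finally:
            TRACE.append(("exit", fn.__name__, time.monotonic()))
    return wrapper

@instrument
def solve(n):
    return sum(i * i for i in range(n))

solve(1000)

# Post-processing: reconstruct per-function durations from the event stream.
durations, stack = {}, []
for event, name, ts in TRACE:
    if event == "enter":
        stack.append(ts)
    else:
        durations[name] = ts - stack.pop()
print(sorted(durations))  # ['solve']
```

A real post-processor, like the AIMS one, would also subtract the cost of the recording calls themselves so that the reported timings reflect the uninstrumented program.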
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers. It was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code), which improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress-test the GNC flight algorithms under examination. The software provides facilities to conduct these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards) and the introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality code can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
Impact of water quality on chlorine demand of corroding copper.
Lytle, Darren A; Liggett, Jennifer
2016-04-01
Copper is widely used in drinking water premise plumbing system materials. In buildings such as hospitals, large and complicated plumbing networks make it difficult to maintain good water quality. Sustaining safe disinfectant residuals throughout a building to protect against waterborne pathogens such as Legionella is particularly challenging since copper and other reactive distribution system materials can exert considerable demands. The objective of this work was to evaluate the impact of pH and orthophosphate on the consumption of free chlorine associated with corroding copper pipes over time. A copper test-loop pilot system was used to control test conditions and systematically meet the study objectives. Chlorine consumption trends attributed to abiotic reactions with copper over time were different for each pH condition tested, and the total amount of chlorine consumed over the test runs increased with increasing pH. Orthophosphate eliminated chlorine consumption trends with elapsed time (i.e., chlorine demand was consistent across entire test runs). Orthophosphate also greatly reduced the total amount of chlorine consumed over the test runs. Interestingly, the total amount of chlorine consumed and the consumption rate were not pH dependent when orthophosphate was present. The findings reflect the complex and competing reactions at the copper pipe wall including corrosion, oxidation of Cu(I) minerals and ions, and possible oxidation of Cu(II) minerals, and the change in chlorine species all as a function of pH. The work has practical applications for maintaining chlorine residuals in premise plumbing drinking water systems including large buildings such as hospitals. Published by Elsevier Ltd.
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which the interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, the CCMC has archived the results of almost 3,000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
NASA Astrophysics Data System (ADS)
Peach, Nicholas
2011-06-01
In this paper, we present a method for a highly decentralized yet structured and flexible approach to achieving systems interoperability by orchestrating data and behavior across distributed military systems and assets, with security considerations addressed from the beginning. We describe an architecture for the tool-based design of business processes called Decentralized Operating Procedures (DOPs) and the deployment of DOPs onto run-time nodes, supporting the parallel execution of each DOP at multiple implementation nodes (fixed locations, vehicles, sensors and soldiers) throughout a battlefield to achieve flexible and reliable interoperability. The described method allows the architecture to: a) provide fine-grained control of the collection and delivery of data between systems; b) allow the definition of a DOP at a strategic (or doctrine) level by defining required system behavior through process syntax at an abstract level, agnostic of implementation details; c) deploy a DOP into heterogeneous environments by the nomination of actual system interfaces and roles at a tactical level; d) rapidly deploy new DOPs in support of new tactics and systems; e) support multiple instances of a DOP in support of multiple missions; f) dynamically add or remove run-time nodes from a specific DOP instance as mission requirements change; g) model the passage of, and business reasons for, the transmission of each data message to a specific DOP instance to support accreditation; h) run on low-powered computers with lightweight tactical messaging. This approach is designed to extend the capabilities of existing standards, such as the Generic Vehicle Architecture (GVA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Application analysis is facilitated through a number of program profiling tools. The tools vary in their complexity, ease of deployment, design, and profiling detail. Specifically, understanding, analyzing, and optimizing are of particular importance for scientific applications, where minor changes in code paths and data-structure layout can have profound effects. Understanding how intricate data structures are accessed and how a given memory system responds is a complex task. In this paper we describe a trace profiling tool, Glprof, aimed specifically at lessening the burden on the programmer of pin-pointing heavily involved data structures during an application's run time and understanding data-structure run-time usage. Moreover, we showcase the tool's modularity using additional cache simulation components. We elaborate on the tool's design and features. Finally, we demonstrate the application of our tool in the context of Spec benchmarks using the Glprof profiler and two concurrently running cache simulators, PPC440 and AMD Interlagos.
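Glprof's implementation is not given in the abstract; as a rough illustration of the core idea — attributing accesses to named data structures at run time so the "heavily involved" ones stand out — here is a minimal Python sketch with an access-counting wrapper. The class and the toy kernel are hypothetical, not part of Glprof.

```python
from collections import Counter

class AccessCountingList:
    """Wraps a list and counts element reads/writes, in the spirit of a
    trace profiler that attributes memory traffic to data structures."""
    def __init__(self, name, data, counter):
        self._name, self._data, self._counter = name, list(data), counter
    def __getitem__(self, i):
        self._counter[self._name] += 1
        return self._data[i]
    def __setitem__(self, i, v):
        self._counter[self._name] += 1
        self._data[i] = v
    def __len__(self):
        return len(self._data)

counts = Counter()
xs = AccessCountingList("xs", range(100), counts)
ys = AccessCountingList("ys", [0] * 100, counts)

# A toy kernel: each iteration performs two xs reads and one ys write.
for i in range(1, len(xs)):
    ys[i] = xs[i] - xs[i - 1]

# The "heavily involved" structure is the one with the most accesses.
hottest, n = counts.most_common(1)[0]
print(hottest, n)  # → xs 198
```

A real trace profiler observes the address stream instead of wrapping containers, but the attribution step — mapping accesses back to a named structure — is the same.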
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
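The abstract's claim rests on quantifying sources of temporal error. A minimal sketch of one such measurement — the scheduling jitter of a software timing loop on a non-real-time OS — might look like the following; the busy-wait strategy and the parameter values are illustrative assumptions, not the authors' implementation.

```python
import time

def measure_jitter(period_s=0.001, n_ticks=50):
    """Busy-wait on a high-resolution clock and record, for each intended
    tick, how far the observed time deviates from the ideal schedule.
    Busy-waiting trades CPU time for precision, a common tactic for
    millisecond-scale behavioral control on non-real-time OSes."""
    t0 = time.perf_counter()
    errors = []
    for k in range(1, n_ticks + 1):
        deadline = t0 + k * period_s
        while time.perf_counter() < deadline:  # spin until the deadline
            pass
        errors.append(time.perf_counter() - deadline)
    return errors

errs = measure_jitter()
# Errors are non-negative by construction (the loop never fires early);
# on an idle machine they are typically well under a millisecond.
print(len(errs), max(errs) >= 0.0)
```

Logging such per-tick errors alongside behavioral events is what allows millisecond-scale alignment claims to be verified rather than assumed.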
Scattering of cylindrical electric field waves from an elliptical dielectric cylindrical shell
NASA Astrophysics Data System (ADS)
Urbanik, E. A.
1982-12-01
This thesis examines the scattering of cylindrical waves by large dielectric scatterers of elliptic cross section. The solution method was the method of moments using a Galerkin approach. Sinusoidal basis and testing functions were used resulting in a higher convergence rate. The higher rate of convergence made it possible for the program to run on the Aeronautical Systems Division's CYBER computers without any special storage methods. This report includes discussion on moment methods, solution of integral equations, and the relationship between the electric field and the source region or self cell singularity. Since the program produced unacceptable run times, no results are contained herein. The importance of this work is the evaluation of the practicality of moment methods using standard techniques. The long run times for a mid-sized scatterer demonstrate the impracticality of moment methods for dielectrics using standard techniques.
NASA Technical Reports Server (NTRS)
Murrow, H. N.; Mccain, W. E.; Rhyne, R. H.
1982-01-01
Measurements of three components of clear air atmospheric turbulence were made with an airplane incorporating a special instrumentation system to provide accurate data resolution to wavelengths of approximately 12,500 m (40,000 ft). Flight samplings covered an altitude range from approximately 500 to 14,000 m (1500 to 46,000 ft) in various meteorological conditions. Individual autocorrelation functions and power spectra for the three turbulence components from 43 data runs taken primarily from mountain wave and jet stream encounters are presented. The flight location (Eastern or Western United States), date, time, run length, intensity level (standard deviation), and values of statistical degrees of freedom for each run are provided in tabular form. The data presented should provide adequate information for detailed meteorological correlations. Some time histories which contain predominant low frequency wave motion are also presented.
Automated CFD Parameter Studies on Distributed Parallel Computers
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Aftosmis, Michael; Pandya, Shishir; Tejnil, Edward; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2002-01-01
The objective of the current work is to build a prototype software system that automates the process of running CFD jobs on Information Power Grid (IPG) resources. This system should remove the need for user monitoring and intervention in every single CFD job. It should enable the use of many different computers to populate a massive run matrix in the shortest time possible. Such a software system has been developed, and is known as the AeroDB script system. The approach taken for the development of AeroDB was to build several discrete modules. These include a database, a job-launcher module, a run-manager module to monitor each individual job, and a web-based user portal for monitoring the progress of the parameter study. The details of the design of AeroDB are presented, followed by the results of a parameter study performed using AeroDB for the analysis of a reusable launch vehicle (RLV). The paper concludes with the lessons learned in this effort and ideas for future work in this area.
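The modular structure described — a database of cases, a job launcher, and a run manager tracking each job's state — can be sketched as below. The names, the state labels, and the round-robin host assignment are illustrative assumptions, not AeroDB's actual design.

```python
# A minimal sketch of a run-matrix manager in the spirit of AeroDB:
# a job database, a launcher that hands cases to hosts, and a
# run-manager that records state transitions.
from itertools import product

def build_run_matrix(machs, alphas):
    """One CFD case per (Mach, alpha) pair, all initially queued."""
    return {(m, a): "queued" for m, a in product(machs, alphas)}

def launch_all(db, hosts, run_case):
    """Round-robin queued cases over hosts; the run-manager's role is
    to move each case queued -> running -> done (or failed)."""
    for i, case in enumerate(sorted(db)):
        host = hosts[i % len(hosts)]
        db[case] = "running"
        db[case] = "done" if run_case(case, host) else "failed"

db = build_run_matrix(machs=[0.6, 0.8], alphas=[0.0, 2.0, 4.0])
launch_all(db, hosts=["node1", "node2"], run_case=lambda c, h: True)
print(sum(1 for s in db.values() if s == "done"))  # → 6
```

The point of the design is that the user populates the matrix once and a portal reads the state table, rather than the user baby-sitting each of the (here six, in practice hundreds of) jobs.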
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications, where space and weight are at a premium.
Gender difference and age-related changes in performance at the long-distance duathlon.
Rüst, Christoph A; Knechtle, Beat; Knechtle, Patrizia; Pfeifer, Susanne; Rosemann, Thomas; Lepers, Romuald; Senn, Oliver
2013-02-01
Gender differences and age-related changes in triathlon (i.e., swimming, cycling, and running) performance have been investigated previously, but data are missing for the duathlon (i.e., running, cycling, and running). We investigated participation and performance trends, the gender difference, and the age-related decline in performance at the "Powerman Zofingen" long-distance duathlon (10-km run, 150-km cycle, and 30-km run) from 2002 to 2011. During this period, there were 2,236 finishers (272 women and 1,964 men, respectively). Linear regression analyses for the 3 split times and the total event time demonstrated that running and cycling times were fairly stable during the last decade for both male and female elite duathletes. The top 10 overall gender differences in times were 16 ± 2, 17 ± 3, 15 ± 3, and 16 ± 5% for the 10-km run, 150-km cycle, 30-km run, and the overall race time, respectively. There was a significant (p < 0.001) age effect for each discipline and for the total race time. The fastest overall race times were achieved between the 25- and 39-year-olds. Female gender and increasing age were associated with increased performance times when additionally controlled for environmental temperatures and race year. There was only a marginal time period effect, ranging between 1.3% (first run) and 9.8% (bike split), with 3.3% for overall race time. In accordance with previous observations in triathlons, the age-related decline in duathlon performance was more pronounced in running than in cycling. Athletes and coaches can use these findings to plan the careers of long-distance duathletes, with the age of peak performance between 25 and 39 years for both women and men.
SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert
"SMART Ver. 0.8 Beta" provides a system developer with software tools to create a telerobotic control system, i.e., a system whereby an end-user can interact with mechatronic equipment. It consists of three main components: the SMART Editor (tsmed), the SMART Real-time kernel (rtos), and the SMART Supervisor (gui). The SMART Editor is a graphical icon-based code generation tool for creating end-user systems, given descriptions of SMART modules. The SMART real-time kernel implements behaviors that combine modules representing input devices, sensors, constraints, filters, and robotic devices. Included with this software release are a number of core modules, which can be combined with additional project- and device-specific modules to create a telerobotic controller. The SMART Supervisor is a graphical front-end for running a SMART system. It is an optional component of the SMART Environment and utilizes the Tcl/Tk windowing and scripting environment. Although the code contained within this release is complete, and can be utilized for defining, running, and interfacing to a sample end-user SMART system, most systems will include additional project- and hardware-specific modules developed either by the system developer or obtained independently from a SMART module developer. SMART is a software system designed to integrate the different robots, input devices, sensors and dynamic elements required for advanced modes of telerobotic control. "SMART Ver. 0.8 Beta" defines and implements a telerobotic controller. A telerobotic system consists of combinations of modules that implement behaviors. Each real-time module represents an input device, robot device, sensor, constraint, connection or filter. The underlying theory utilizes non-linear discretized multidimensional network elements to model each individual module, and guarantees that upon a valid connection, the resulting system will perform in a stable fashion.
Different combinations of modules implement different behaviors. Each module must have, at a minimum, an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator-directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCP/IP ASCII commands. It utilizes a scripting language (Tcl) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.
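The module contract just described — an initialization routine, a parameter adjustment routine, and an update routine, with the kernel updating every in-context module each cycle — can be sketched as follows. Class and method names are illustrative, not SMART's actual interface.

```python
class Module:
    """A SMART-style module: initialize / set_param / update."""
    def __init__(self, name):
        self.name, self.params, self.ticks = name, {}, 0
    def initialize(self):
        self.ticks = 0
    def set_param(self, key, value):   # operator-directed adjustment
        self.params[key] = value
    def update(self):                  # called once per kernel cycle
        self.ticks += 1

class Kernel:
    """Sets up and initializes each module, then updates the ones that
    are in context at a fixed rate; a behavior is the set of modules
    currently in context."""
    def __init__(self, modules):
        self.modules = list(modules)
        for m in self.modules:
            m.initialize()
    def run(self, cycles, in_context=lambda m: True):
        for _ in range(cycles):
            for m in self.modules:
                if in_context(m):
                    m.update()

mods = [Module("joystick"), Module("robot"), Module("filter")]
# A behavioral mode that excludes the filter module from context:
Kernel(mods).run(100, in_context=lambda m: m.name != "filter")
print([m.ticks for m in mods])  # → [100, 100, 0]
```

Switching behavioral modes then amounts to changing the `in_context` predicate rather than rewiring the modules, which is the flexibility the network-element formulation is meant to buy.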
On an LAS-integrated soft PLC system based on WorldFIP fieldbus.
Liang, Geng; Li, Zhijun; Li, Wen; Bai, Yan
2012-01-01
In discrete control based on traditional WorldFIP field intelligent nodes, communication efficiency is lowered and real-time performance is inadequate when the scale of field control is large. A soft PLC system based on the WorldFIP fieldbus was designed and implemented. The Link Active Scheduler (LAS) was integrated into the system, and field intelligent I/O modules acted as networked basic nodes. Discrete control logic was implemented with the LAS-integrated soft PLC system. The proposed system was composed of a configuration and supervisory sub-system and running sub-systems. The configuration and supervisory sub-system was implemented with a personal computer or an industrial personal computer; the running sub-systems were designed and implemented based on embedded hardware and software systems. Communication and scheduling in the running sub-system were implemented with an embedded sub-module; discrete control and system self-diagnosis were implemented with another embedded sub-module. The structure of the proposed system is presented, and the methodology for the design of the sub-systems is expounded. Experiments were carried out to evaluate the performance of the proposed system in both discrete and process control by investigating the effect of the network data transmission delay induced by the soft PLC in the WorldFIP network, and of the CPU workload, on the resulting control performance. The experimental observations indicated that the proposed system is practically applicable. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Local search to improve coordinate-based task mapping
Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...
2015-10-31
We present a local search strategy to improve the coordinate-based mapping of a parallel job's tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job's communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK systems. Utilizing the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing run-time variability. Furthermore, we show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
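The strategy described — reducing the total hop count between communicating rank pairs by locally improving the task-to-coordinate mapping — can be sketched as a greedy pairwise-swap search. The acceptance rule and stopping criterion here are illustrative assumptions; the paper's actual neighborhood and ordering may differ.

```python
def hops(p, q):
    """Manhattan distance between two mesh coordinates."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def total_hops(mapping, pairs):
    return sum(hops(mapping[a], mapping[b]) for a, b in pairs)

def local_search(mapping, pairs):
    """Keep swapping two ranks' coordinates whenever the swap reduces
    the total hop count; stop at a local minimum."""
    best = total_hops(mapping, pairs)
    improved = True
    while improved:
        improved = False
        ranks = list(mapping)
        for i in range(len(ranks)):
            for j in range(i + 1, len(ranks)):
                a, b = ranks[i], ranks[j]
                mapping[a], mapping[b] = mapping[b], mapping[a]
                cost = total_hops(mapping, pairs)
                if cost < best:
                    best, improved = cost, True
                else:  # undo swaps that do not help
                    mapping[a], mapping[b] = mapping[b], mapping[a]
    return best

# Four stencil-chain ranks mapped badly onto a non-contiguous
# allocation of mesh coordinates:
pairs = [(0, 1), (1, 2), (2, 3)]
mapping = {0: (0, 0), 1: (3, 3), 2: (0, 1), 3: (3, 2)}
before = total_hops(mapping, pairs)
after = local_search(mapping, pairs)
print(before, "->", after)  # → 15 -> 6
```

The allocation's coordinates are fixed (the job cannot choose its nodes); only the assignment of ranks to those coordinates changes, which is exactly the degree of freedom the paper exploits.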
Real-time operating system for a multi-laser/multi-detector system
NASA Technical Reports Server (NTRS)
Coles, G.
1980-01-01
The laser-one hazard detector system, used on the Rensselaer Mars rover, is reviewed briefly with respect to the hardware subsystems, the operation, and the results obtained. A multidetector scanning system was designed to improve on the original system. Interactive support software was designed and programmed to implement real time control of the rover or platform with the elevation scanning mast. The formats of both the raw data and the post-run data files were selected. In addition, the interface requirements were selected and some initial hardware-software testing was completed.
Street Viewer: An Autonomous Vision Based Traffic Tracking System.
Bottino, Andrea; Garbo, Alessandro; Loiacono, Carmelo; Quer, Stefano
2016-06-03
The development of intelligent transportation systems requires the availability of both accurate traffic information in real time and a cost-effective solution. In this paper, we describe Street Viewer, a system capable of analyzing traffic behavior in different scenarios from images taken with an off-the-shelf optical camera. Street Viewer operates in real time on embedded hardware architectures with limited computational resources. The system features a pipelined architecture that, on the one hand, allows intensive exploitation of multi-threading and, on the other, improves the overall accuracy and robustness of the system, since each layer refines the information it receives as input for the following layers. Another relevant feature of our approach is that it is self-adaptive. During an initial setup, the application runs in learning mode to build a model of the flow patterns in the observed area. Once the model is stable, the system switches to the on-line mode, where the flow model is used to count vehicles traveling on each lane and to produce a traffic information summary. If changes in the flow model are detected, the system switches back autonomously to the learning mode. The accuracy and the robustness of the system are analyzed in the paper through experimental results obtained on several different scenarios and by running the system for long periods of time.
A real-time posture monitoring method for rail vehicle bodies based on machine vision
NASA Astrophysics Data System (ADS)
Liu, Dongrun; Lu, Zhaijun; Cao, Tianpei; Li, Tian
2017-06-01
Monitoring vehicle operating conditions has become significantly important in modern high-speed railway systems. However, monitoring of the roll angle of vehicle bodies has principally been limited to tilting trains, and few studies have focused on monitoring the running posture of vehicle bodies during operation. We propose a real-time posture monitoring method to fulfil real-time monitoring requirements, taking rail surfaces and centrelines as detection references. In realising the proposed method, we built a mathematical computational model based on space coordinate transformations to calculate the attitude angles of vehicles in operation and the vertical and lateral vibration displacements of single measuring points. Moreover, comparison and verification of reliability between system and field results were conducted. Results show that the roll angles of car bodies obtained through the system exhibit variation trends similar to those converted from the dynamic deflection of the bogie secondary air springs. The monitoring results of two identical conditions were basically the same, highlighting repeatability and good monitoring accuracy. Therefore, our monitoring results were reliable in reflecting posture changes in running railway vehicles.
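The paper's full model rests on space coordinate transformations; as a much-simplified illustration of the underlying geometry, a roll angle can be estimated from the differential vertical displacement of two laterally separated measuring points. The function name and the numbers below are assumptions for illustration, not the authors' computational model.

```python
import math

def roll_angle_deg(z_left, z_right, width_m):
    """Small-angle roll estimate (degrees) from the vertical
    displacements of two measuring points separated laterally by
    width_m -- a simplified stand-in for the paper's full
    coordinate-transform model."""
    return math.degrees(math.atan2(z_left - z_right, width_m))

# 20 mm differential displacement across a 3 m wide car body:
angle = roll_angle_deg(0.015, -0.005, 3.0)
print(round(angle, 3))  # → 0.382
```

Sub-degree roll angles like this one are why the comparison against secondary air-spring deflection matters: both signals are small, so agreement in trend is the meaningful check.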
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method, integrated with the related switching conditions, to give sufficient conditions ensuring stable running of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. The application to an injection molding process displays the effectiveness and superiority of the proposed strategy.
Improved Algorithms Speed It Up for Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazi, A
2005-09-20
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.
NASA Astrophysics Data System (ADS)
Chwala, Christian; Keis, Felix; Kunstmann, Harald
2016-03-01
The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability is still a crucial problem and limits research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to 1 s. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine ski resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of 1 s. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.
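The abstract describes polling transmitted (TSL) and received (RSL) signal levels per CML via SNMP requests. A minimal, hardware-agnostic sketch of one polling sweep is below; the OIDs are placeholders (real CML hardware exposes vendor-specific OIDs, which is why the authors' system has to abstract over manufacturers), and `snmp_get` would be backed by an SNMP library such as pysnmp in a real deployment.

```python
import time

# Hypothetical OIDs standing in for the vendor-specific ones that
# expose transmitted (TSL) and received (RSL) signal levels.
OIDS = {"tsl": "1.3.6.1.4.1.99999.1", "rsl": "1.3.6.1.4.1.99999.2"}

def poll_links(links, snmp_get, clock=time.time):
    """One polling sweep: issue an SNMP GET per link and OID and return
    timestamped records. snmp_get(host, oid) is injected so the poller
    stays independent of the SNMP library and the hardware vendor."""
    records = []
    for host in links:
        records.append({
            "time": clock(),
            "host": host,
            "tsl": snmp_get(host, OIDS["tsl"]),
            "rsl": snmp_get(host, OIDS["rsl"]),
        })
    return records

# A fake agent stands in for the radio hardware in this sketch.
fake_agent = lambda host, oid: -40.0 if oid.endswith(".2") else 18.0
recs = poll_links(["cml-01", "cml-02"], fake_agent)
print(len(recs), recs[0]["rsl"])  # → 2 -40.0
```

Repeating such a sweep once per second across hundreds of links is essentially a scheduling problem; the injected-fetch design mirrors the manufacturer independence the paper demonstrates.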
NASA Astrophysics Data System (ADS)
Chwala, C.; Keis, F.; Kunstmann, H.
2015-11-01
The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability is still a crucial problem and limits research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to one second. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine skiing resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of one second. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
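The odd-even cyclic reduction the abstract singles out can be illustrated in serial form. Below is a sketch for a symmetric positive definite tridiagonal system; the modified Cholesky component is omitted, and the vectorization itself is only indicated in comments.

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.
    a: sub-diagonal (a[0] = 0), b: diagonal, c: super-diagonal
    (c[-1] = 0), d: right-hand side; the size n must be 2**q - 1.
    On a vector machine, the inner loop at each level is a single
    independent vector operation over all surviving equations --
    that independence is the source of the Cyber 205 speedup."""
    a, b, c, d = list(a), list(b), list(c), list(d)
    n = len(b)
    q = n.bit_length()                  # n = 2**q - 1
    stride = 1
    for _ in range(q - 1):              # forward elimination
        for i in range(2 * stride - 1, n, 2 * stride):
            al = -a[i] / b[i - stride]  # eliminate the lower neighbor
            be = -c[i] / b[i + stride]  # eliminate the upper neighbor
            b[i] += al * c[i - stride] + be * a[i + stride]
            d[i] += al * d[i - stride] + be * d[i + stride]
            a[i] = al * a[i - stride]
            c[i] = be * c[i + stride]
        stride *= 2
    x = [0.0] * n
    while stride >= 1:                  # back substitution
        for i in range(stride - 1, n, 2 * stride):
            s = d[i]
            if i - stride >= 0:
                s -= a[i] * x[i - stride]
            if i + stride < n:
                s -= c[i] * x[i + stride]
            x[i] = s / b[i]
        stride //= 2
    return x

# SPD test system tridiag(-1, 4, -1) of size 7 with known solution 1..7:
a = [0.0] + [-1.0] * 6
b = [4.0] * 7
c = [-1.0] * 6 + [0.0]
d = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 22.0]
x = cyclic_reduction(a, b, c, d)
print([round(v, 6) for v in x])  # → [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Each level halves the number of coupled equations, so the method does roughly twice the arithmetic of sequential Gaussian elimination but exposes it as long vector operations, the trade the abstract's timings reflect.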
1979-12-01
Real-Time, General-Purpose, High-Speed Signal Processing System (AD-A081 851, SACLANT ASW Research Centre, La Spezia, Italy). MASCOT provides: 1. system-build software with compile-time checks; 2. a run-time supervisor kernel; 3. monitoring and testing of system operation once activated. The table of contents includes "Signal processing language and operating system" by S. Weinstein (pp. 23-1 to 23-12).
NASA Astrophysics Data System (ADS)
Block, J.; Crawl, D.; Artes, T.; Cowart, C.; de Callafon, R.; DeFanti, T.; Graham, J.; Smarr, L.; Srivas, T.; Altintas, I.
2016-12-01
The NSF-funded WIFIRE project has designed a web-based wildfire modeling simulation and visualization tool called FireMap. The tool executes FARSITE to model fire propagation using dynamic weather and fire data, configuration settings provided by the user, and static topography and fuel datasets already built-in. Using GIS capabilities combined with scalable big data integration and processing, FireMap enables simple execution of the model with options for running ensembles by taking the information uncertainty into account. The results are easily viewable, sharable, repeatable, and can be animated as a time series. From these capabilities, users can model real-time fire behavior, analyze what-if scenarios, and keep a history of model runs over time for sharing with collaborators. FireMap runs FARSITE with national and local sensor networks for real-time weather data ingestion and High-Resolution Rapid Refresh (HRRR) weather for forecasted weather. The HRRR is a NOAA/NCEP operational weather prediction system comprised of a numerical forecast model and an analysis/assimilation system to initialize the model. It is run with a horizontal resolution of 3 km, has 50 vertical levels, and has a temporal resolution of 15 minutes. The HRRR requires an Environmental Data Exchange (EDEX) server to receive the feed and generate secondary products out of it for the modeling. UCSD's EDEX server, funded by NSF, makes high-resolution weather data available to researchers worldwide and enables visualization of weather systems and weather events lasting months or even years. The high-speed server aggregates weather data from the University Consortium for Atmospheric Research by way of a subscription service from the Consortium called the Internet Data Distribution system. These features are part of WIFIRE's long-term goals to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior.
Although FireMap is a research product of WIFIRE, developed in collaboration with a number of fire departments, the tool is operational in pilot form, providing big data-driven predictive fire spread modeling. Most recently, FireMap was used for situational awareness in the July 2016 Sand Fire by the LA City and LA County Fire Departments.
Mixing and residence times of stormwater runoff in a detention system
Martin, Edward H.
1989-01-01
Five tracer runs were performed on a detention pond and wetlands system to determine mixing and residence times in the system. The data indicate that at low discharges and with large amounts of storage, the pond is moderately mixed with residence times not much less than the theoretical maximum possible under complete mixing. At higher discharges and with less storage in the pond, short-circuiting occurs, reducing the amount of mixing in the pond and appreciably reducing the residence times. The time between pond outlet peak concentrations and wetlands outlet peak concentrations indicate that in the wetlands, mixing increases with decreasing discharge and increasing storage.
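The "theoretical maximum possible under complete mixing" that the tracer data are compared against is the nominal detention time V/Q; for an ideally mixed basin the residence-time distribution is exponential with exactly that mean. A small sketch with illustrative volume and discharge values (not the study's data):

```python
import math

def nominal_detention_time(volume_m3, discharge_m3s):
    """V/Q: the mean residence time of a completely mixed basin."""
    return volume_m3 / discharge_m3s

def mean_of_exponential_rtd(tau, dt=1.0, t_max_factor=40):
    """Numerically integrate t*E(t) for the complete-mixing residence
    time distribution E(t) = exp(-t/tau)/tau; the mean recovers tau."""
    t, mean = 0.0, 0.0
    while t < t_max_factor * tau:
        e = math.exp(-t / tau) / tau
        mean += t * e * dt
        t += dt
    return mean

# Illustrative pond: 5000 m^3 of storage at 0.5 m^3/s discharge.
tau = nominal_detention_time(volume_m3=5000.0, discharge_m3s=0.5)
print(round(mean_of_exponential_rtd(tau, dt=10.0)))  # → 10000
```

Short-circuiting shows up in tracer data as a centroid of the outlet concentration curve arriving well before this V/Q benchmark, which is the pattern the study reports at higher discharges.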
Real Time Control of the SSC String Magnets
NASA Astrophysics Data System (ADS)
Calvo, O.; Flora, R.; MacPherson, M.
1987-08-01
The system described in this paper, called SECAR, was designed to control the excitation of a test string of magnets for the proposed Superconducting Super Collider (SSC) and will be used to upgrade the present Tevatron Excitation, Control and Regulation (TECAR) hardware and software. It resides in a VME crate and is controlled by a 68020/68881 based CPU running the application software under a real time operating system named VRTX.
Real time control of the SSC string magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo, O.; Flora, R.; MacPherson, M.
1987-08-01
The system described in this paper, called SECAR, was designed to control the excitation of a test string of magnets for the proposed Superconducting Super Collider (SSC) and will be used to upgrade the present Tevatron Excitation, Control and Regulation (TECAR) hardware and software. It resides in a VME crate and is controlled by a 68020/68881 based CPU running the application software under a real time operating system named VRTX.
Contextual classification on a CDC Flexible Processor system. [for photomapped remote sensing data
NASA Technical Reports Server (NTRS)
Smith, B. W.; Siegel, H. J.; Swain, P. H.
1981-01-01
A potential hardware organization for the Flexible Processor Array is presented. An algorithm that implements a contextual classifier for remote sensing data analysis is given, along with uniprocessor classification algorithms. The Flexible Processor algorithm is provided, as are simulated timings for contextual classifiers run on the Flexible Processor Array and another system. The timings are analyzed for context neighborhoods of sizes three and nine.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Comparison of muscle synergies for running between different foot strike patterns
Nishida, Koji; Hagio, Shota; Kibushi, Benio; Moritani, Toshio; Kouzaki, Motoki
2017-01-01
It is well known that humans run with a fore-foot strike (FFS), a mid-foot strike (MFS) or a rear-foot strike (RFS). A modular neural control mechanism of human walking and running has been discussed in terms of muscle synergies. However, the neural control mechanisms for different foot strike patterns during running have been overlooked even though kinetic and kinematic differences between different foot strike patterns have been reported. Thus, we examined the differences in the neural control mechanisms of human running between FFS and RFS by comparing the muscle synergies extracted from each foot strike pattern during running. Muscle synergies were extracted using non-negative matrix factorization with electromyogram activity recorded bilaterally from 12 limb and trunk muscles in ten male subjects during FFS and RFS running at different speeds (5–15 km/h). Six muscle synergies were extracted from all conditions, and each synergy had a specific function and a single main peak of activity in a cycle. The six muscle synergies were similar between FFS and RFS as well as across subjects and speeds. However, some muscle weightings showed significant differences between FFS and RFS, especially the weightings of the tibialis anterior of the landing leg in synergies activated just before touchdown. The activation patterns of the synergies were also different for each foot strike pattern in terms of the timing, duration, and magnitude of the main peak of activity. These results suggest that the central nervous system controls running by sending a sequence of signals to six muscle synergies. Furthermore, a change in the foot strike pattern is accomplished by modulating the timing, duration and magnitude of the muscle synergy activity and by selectively activating other muscle synergies or subsets of the muscle synergies. PMID:28158258
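The synergy extraction described above rests on non-negative matrix factorization. A minimal sketch using the standard Lee-Seung multiplicative updates is shown below; the function name, matrix shapes, and iteration count are illustrative assumptions, not the study's actual analysis code.

```python
import numpy as np

def extract_synergies(emg, n_synergies, n_iter=500, seed=0):
    """Extract muscle synergies from an EMG envelope matrix via
    non-negative matrix factorization (Lee-Seung multiplicative updates).

    emg: (n_muscles, n_samples) array of non-negative EMG envelopes.
    Returns W (n_muscles, n_synergies) muscle weightings and
    H (n_synergies, n_samples) activation patterns, with emg ~= W @ H.
    """
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_synergies)) + 1e-6
    H = rng.random((n_synergies, n_samples)) + 1e-6
    eps = 1e-12
    for _ in range(n_iter):
        # Multiplicative updates keep both factors non-negative.
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In a study like this one, W would hold the per-muscle weightings compared between FFS and RFS, while the rows of H give the activation timing, duration, and magnitude within the stride cycle.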
A Software Architecture for Adaptive Modular Sensing Systems
Lyle, Andrew C.; Naish, Michael D.
2010-01-01
By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614
Mean platelet volume (MPV) predicts middle distance running performance.
Lippi, Giuseppe; Salvagno, Gian Luca; Danese, Elisa; Skafidas, Spyros; Tarperi, Cantor; Guidi, Gian Cesare; Schena, Federico
2014-01-01
Running economy and performance in middle distance running depend on several physiological factors, which include anthropometric variables, functional characteristics, training volume and intensity. Since little information is available about hematological predictors of middle distance running time, we investigated whether some hematological parameters may be associated with middle distance running performance in a large sample of recreational runners. The study population consisted of 43 amateur runners (15 females, 28 males; median age 47 years), who successfully concluded a 21.1 km half-marathon at 75-85% of their maximal aerobic power (VO2max). Whole blood was collected 10 min before the run started and immediately thereafter, and hematological testing was completed within 2 hours after sample collection. The values of lymphocytes and eosinophils exhibited a significant decrease compared to pre-run values, whereas those of mean corpuscular volume (MCV), platelets, mean platelet volume (MPV), white blood cells (WBCs), neutrophils and monocytes were significantly increased after the run. In univariate analysis, significant associations with running time were found for pre-run values of hematocrit, hemoglobin, mean corpuscular hemoglobin (MCH), red blood cell distribution width (RDW), MPV, reticulocyte hemoglobin concentration (RetCHR), and post-run values of MCH, RDW, MPV, monocytes and RetCHR. In multivariate analysis, in which running time was entered as the dependent variable, and age, sex, blood lactate, body mass index, VO2max, mean training regimen and the hematological parameters significantly associated with running performance in univariate analysis were entered as independent variables, only the MPV values before and after the trial remained significantly associated with running time. After adjustment for platelet count, the MPV value before the run (p = 0.042), but not thereafter (p = 0.247), remained significantly associated with running performance.
The significant association between baseline MPV and running time suggests that hyperactive platelets may exert some pleiotropic effects on endurance performance.
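The multivariate step described above is ordinary least-squares regression of running time on several predictors at once. A minimal sketch is given below; it is illustrative only (the study would have used standard statistical software), and the variable names are assumptions.

```python
import numpy as np

def multivariate_association(y, X):
    """Ordinary least squares of an outcome on candidate predictors.

    y: (n,) outcomes, e.g. running times.
    X: (n, p) predictor matrix, e.g. columns for MPV, age, BMI, VO2max.
    Returns the coefficient vector, intercept first.
    """
    Xd = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta
```

Each fitted coefficient estimates the change in running time per unit change of one predictor, holding the others fixed, which is exactly the adjustment logic used when MPV remained significant after accounting for the other variables.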
Experimental evaluation of tool run-out in micro milling
NASA Astrophysics Data System (ADS)
Attanasio, Aldo; Ceretti, Elisabetta
2018-05-01
This paper deals with the micro milling process, focusing on tool run-out measurement. In fact, among the effects of the scale reduction from macro to micro (i.e., size effects), tool run-out plays an important role. This research is aimed at developing an easy and reliable method to measure tool run-out in micro milling based on experimental tests and an analytical model. From an Industry 4.0 perspective, this measuring strategy can be integrated into an adaptive system for controlling cutting forces, with the objective of improving production quality and process stability while reducing tool wear and machining costs. The proposed procedure estimates the tool run-out parameters from the tool diameter, the channel width, and the phase angle between the cutting edges. The cutting edge phase measurement is based on force signal analysis. The developed procedure has been tested on data from micro milling experiments performed on a Ti6Al4V sample. The results showed that the developed procedure can be successfully used for tool run-out estimation.
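As a rough illustration of the geometric idea, the sketch below assumes (a simplification, not the paper's full analytical model) that for a two-flute tool, run-out widens the machined channel symmetrically, so the offset is half the difference between measured channel width and nominal tool diameter, and the angular parameter is the deviation of the measured edge phase from the nominal flute spacing.

```python
def runout_offset(channel_width_um, tool_diameter_um):
    """Estimate the run-out offset (same units as the inputs) from the
    measured channel width and the nominal tool diameter, assuming the
    run-out widens the slot symmetrically."""
    if channel_width_um < tool_diameter_um:
        raise ValueError("measured channel cannot be narrower than the tool")
    return 0.5 * (channel_width_um - tool_diameter_um)

def runout_angle(phase_deg, n_flutes=2):
    """Deviation of the measured cutting-edge phase angle from the nominal
    flute spacing of 360/n_flutes degrees."""
    return phase_deg - 360.0 / n_flutes
```

For example, a 200 um tool that cuts a 206 um channel would indicate roughly a 3 um run-out offset under this assumption.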
Architecture of a framework for providing information services for public transport.
García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013-2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at a center-of-mass energy of √s = 13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, it did previously.
Picometer-resolution dual-comb spectroscopy with a free-running fiber laser.
Zhao, Xin; Hu, Guoqing; Zhao, Bofeng; Li, Cui; Pan, Yingling; Liu, Ya; Yasui, Takeshi; Zheng, Zheng
2016-09-19
Dual-comb spectroscopy holds promise as a real-time, high-resolution spectroscopy tool. However, in its conventional schemes, the stringent requirement on the coherence between two lasers calls for sophisticated control systems. By replacing the control electronics with an all-optical dual-comb lasing scheme, a simplified dual-comb spectroscopy scheme is demonstrated using one dual-wavelength, passively mode-locked fiber laser. Pulses with an intracavity-dispersion-determined repetition-frequency difference are shown to have good mutual coherence and stability. The capability to resolve the comb teeth and a picometer-wide optical spectral resolution are demonstrated using a simple data acquisition system. Energy-efficient, free-running fiber lasers with a small comb-tooth spacing could enable low-cost dual-comb systems.
Just in Time - Expecting Failure: Do JIT Principles Run Counter to DoD’s Business Nature?
2014-04-01
Regiment. The last several years witnessed both commercial industry and the Department of Defense (DoD) logistics supply chains trending toward an...moving items through a production system only when needed. Equating inventory to an avoidable waste instead of adding value to a company directly...Louisiana plant for a week, Honda Motor Company to suspend orders for Japanese-built Honda and Acura models, and producers of Boeing's 787 to run billions
2017-06-01
maintenance times from the fleet are randomly resampled when running the model to enhance model realism. The use of a simulation model to represent the...helicopter regiment. 2. Attack Helicopter UH TIGER The EC665, or Airbus Helicopter TIGER, (Figure 3) is a four-bladed, twin-engine multi-role attack...migrated into the automated management system SAP Standard Product Family (SASPF), and the usage clock starts to run with the amount of the current
Methods and Measurements in Real-Time Air Traffic Control System Simulation
1983-04-01
runs for each of 31 subjects under each of 6 sector geometry-traffic density combinations (cells). Initial analyses, involving correlations between the...two runs in each cell, indicated very low correlations between the replicates. It was decided that before going further it would be best to conduct a
Malisoux, Laurent; Delattre, Nicolas; Urhausen, Axel; Theisen, Daniel
2017-01-01
Introduction Repetitive loading of the musculoskeletal system is suggested to be involved in the underlying mechanism of the majority of running-related injuries (RRIs). Accordingly, heavier runners are assumed to be at a higher risk of RRI. The cushioning system of modern running shoes is expected to protect runners against high impact forces and, therefore, against RRI. However, the role of shoe cushioning in injury prevention remains unclear. The main aim of this study is to investigate the influence of shoe cushioning and body mass on RRI risk, while simultaneously exploring the association between running technique and RRI risk. Methods and analysis This double-blinded randomised controlled trial will involve about 800 healthy leisure-time runners. They will randomly receive one of two running shoe models that differ in their cushioning properties (ie, stiffness) by ~35%. The participants will perform a running test on an instrumented treadmill at their preferred running speed at baseline. They will then be followed up prospectively over a 6-month period, during which they will self-report all their sports activities, as well as any injury, in an internet-based database, TIPPS (Training and Injury Prevention Platform for Sports). Cox regression analyses will be used to compare injury risk between the study groups and to investigate the association among training, biomechanical and anatomical risk factors, and injury risk. Ethics and dissemination The study was approved by the National Ethics Committee for Research (Ref: 201701/02 v1.1). Outcomes will be disseminated through publications in peer-reviewed journals, presentations at international conferences, as well as articles in popular magazines and on specialised websites. Trial registration number NCT03115437, Pre-results. PMID:28827268
NASA Astrophysics Data System (ADS)
Dinkins, Matthew; Colley, Stephen
2008-07-01
Hardware and software specialized for real-time control reduce the timing jitter of executables compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine whether such a system would be capable of meeting the timing requirements for closed-loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to that of specialized real-time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.
Presentation and response timing accuracy in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2015-06-01
Web-based research is becoming ubiquitous in the behavioral sciences, facilitated by convenient, readily available participant pools and relatively straightforward ways of running experiments: most recently, through the development of the HTML5 standard. Although in most studies participants give untimed responses, there is a growing interest in being able to record response times online. Existing data on the accuracy and cross-machine variability of online timing measures are limited, and generally they have compared behavioral data gathered on the Web with similar data gathered in the lab. For this article, we took a more direct approach, examining two ways of running experiments online (Adobe Flash, and HTML5 with CSS3 and JavaScript) across 19 different computer systems. We used specialist hardware to measure stimulus display durations and to generate precise response times to visual stimuli in order to assess measurement accuracy, examining effects of duration, browser, and system-to-system variability (such as across different Windows versions), as well as effects of processing power and graphics capability. We found that (a) Flash and JavaScript's presentation and response time measurement accuracy are similar; (b) within-system variability is generally small, even in low-powered machines under high load; (c) the variability of measured response times across systems is somewhat larger; and (d) browser type and system hardware appear to have relatively small effects on measured response times. Modeling of the effects of this technical variability suggests that for most within- and between-subjects experiments, Flash and JavaScript can both be used to accurately detect differences in response times across conditions. Concerns are, however, noted about using some correlational or longitudinal designs online.
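A much cruder, software-only version of such a timing check can be run on any machine by sampling successive high-resolution timer reads. The sketch below is an assumed stand-in for the study's photodiode-based hardware measurements; it only gauges the timer granularity available to an experiment script, not display or input latency.

```python
import time

def timer_resolution_samples(n=1000):
    """Record n back-to-back perf_counter deltas (seconds).

    The distribution of these deltas gives a rough lower bound on the
    timing granularity an experiment script can observe on this system.
    """
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        samples.append(t1 - t0)
    return samples
```

Summarizing the samples (e.g. median and maximum) across machines mirrors, in miniature, the within-system versus between-system variability comparison made in the article.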
Real-time simulation of an automotive gas turbine using the hybrid computer
NASA Technical Reports Server (NTRS)
Costakis, W.; Merrill, W. C.
1984-01-01
A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with a four-speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated digitally. The simulation runs faster than real time, which makes it a useful tool for a variety of analytical studies.
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time, HIL, dynamics, manipulator multi-body dynamics, and a moving object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an Open GL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
NASA Astrophysics Data System (ADS)
Rajib, M. A.; Merwade, V.; Song, C.; Zhao, L.; Kim, I. L.; Zhe, S.
2014-12-01
Setting up any hydrologic model requires a large amount of effort, including compilation of all the data, creation of input files, and calibration and validation. Given the effort involved, it is possible that models for a watershed get created multiple times by multiple groups or organizations to accomplish different research, educational or policy goals. To reduce this duplication of effort and to enable collaboration among different groups or organizations around an already existing hydrologic model, a platform is needed where anyone can search for existing models, perform simple scenario analysis and visualize model results. The creator and users of a model on such a platform can then collaborate to accomplish new research or educational objectives. From this perspective, a prototype cyber-infrastructure (CI), called SWATShare, has been developed for sharing, running and visualizing Soil and Water Assessment Tool (SWAT) models in an interactive GIS-enabled web environment. Users can utilize SWATShare to publish or upload their own models; search for and download existing SWAT models developed by others; and run simulations, including calibration, using high-performance resources provided by XSEDE and the Cloud. Besides running and sharing, SWATShare hosts a novel spatio-temporal visualization system for SWAT model outputs. On the temporal scale, the system creates time-series plots for all the hydrology and water quality variables available along the reach as well as at the watershed level. On the spatial scale, the system can dynamically generate sub-basin-level thematic maps for any variable at any user-defined date or date range, thereby allowing users to run animations or download the data for subsequent analyses. In addition to research, SWATShare can also be used within a classroom setting as an educational tool for modeling and comparing the hydrologic processes under different geographic and climatic settings.
SWATShare is publicly available at https://www.water-hub.org/swatshare.
Jobs masonry in LHCb with elastic Grid Jobs
NASA Astrophysics Data System (ADS)
Stagni, F.; Charpentier, Ph
2015-12-01
In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit implemented by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs’ execution. In many modern interwares, the jobs are actually executed by pilot jobs, that can use the whole available time in running multiple consecutive jobs. If at some point the available time in a pilot is too short for the execution of any job, it should be released, while it could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit computing capabilities available to a pilot, even for resources with limited time capabilities, by adding elasticity to production MonteCarlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute a MC job, whose length will be adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different amounts of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events a MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. 
This paper will demonstrate that, using this simple but effective solution, LHCb manages to make a more efficient use of the available resources, and that it can easily use new types of resources. An example is represented by resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-jobs pilots. A second example is represented by opportunistic resources with limited available time.
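The just-in-time decision described above reduces to a one-line estimate from the three values the system needs. The sketch below is hedged: the function name, units, and safety margin are illustrative assumptions, not LHCbDIRAC's actual code.

```python
import math

def events_to_produce(cpu_work_per_event, machine_power, time_left_s,
                      safety_margin=0.1):
    """Estimate how many Monte Carlo events an elastic job can simulate
    before the batch system's time limit.

    cpu_work_per_event: normalized CPU work needed per event (e.g. HS06.s).
    machine_power:      normalized power of the worker node (e.g. HS06).
    time_left_s:        wall-clock seconds left before the job is killed.
    """
    usable = time_left_s * (1.0 - safety_margin)  # headroom for finalization/upload
    n = math.floor(usable * machine_power / cpu_work_per_event)
    return max(n, 0)
```

With this shape of estimate, the same MC job naturally produces more events on a long queue and fewer (or none) in the tail of a pilot's remaining time, which is exactly the "masonry" behavior described above.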
Real-time plasma control based on the ISTTOK tomography diagnostic
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.
2008-10-01
The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.
Real Time Linux - The RTOS for Astronomy?
NASA Astrophysics Data System (ADS)
Daly, P. N.
The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables of standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter, peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and allows a fully functioning workstation to co-exist with hard real time performance. The counterweights (the negatives) of a limited set of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access and the danger of ignorance of real time programming issues were also discussed.
See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads for this presentation.
Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-12-01
The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
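A simple stand-in for such a ring fit is the algebraic (Kasa) least-squares circle fit, which reduces each candidate ring to one small linear solve, the kind of operation that maps well onto GPU kernels. The sketch below illustrates the idea only; it is not the NA62 GPU implementation.

```python
import numpy as np

def fit_ring(x, y):
    """Least-squares (Kasa) circle fit to hit coordinates.

    Writes the circle as x^2 + y^2 + D*x + E*y + F = 0 and solves the
    resulting linear system for (D, E, F). Returns (xc, yc, radius).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    r = np.sqrt(xc**2 + yc**2 - F)
    return xc, yc, r
```

Because the fit is linear in (D, E, F), many rings can be fitted in parallel batches, which is what makes this family of algorithms attractive for low-latency trigger work.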
Porting DubaiSat-2 Flight Software to RTEMS: A Feasibility Study
NASA Astrophysics Data System (ADS)
Khoory, Mohammed; Al Shamsi, Zakareyya; Al Midfa, Ibrahim
2015-09-01
This paper details the process taken by EIAST to study RTEMS as a potential real-time operating system for future space missions. The direction was to attempt to run the DubaiSat-2 flight software under RTEMS 4.10.2 with as little modification to the original source as possible. The implementation used a “translation layer” to translate system calls used by the DS-2 flight software into RTEMS system calls. The RTEMS RTL project was integrated to satisfy the run-time loading requirement, and some differences in the filesystem were encountered and worked around. The implementation was tested for performance and stability, and comparisons were made. The conclusion is that RTEMS provides an adequate base for future space missions, with certain advantages over other RTOSs including cost, a smaller executable size, and control over the source. Drawbacks include the slow speed of loading tasks during runtime and some filesystem integrity issues during unexpected reboots.
Design of Simple Landslide Monitoring System
NASA Astrophysics Data System (ADS)
Meng, Qingjia; Cai, Lingling
2018-01-01
The simple landslide monitoring system is designed mainly for slopes, collapse bodies and surface cracks. In harsh environments, the dynamic displacement data of the disaster body are transmitted to the terminal acquisition system in real time. The main controller of the system is a PIC32MX795F512. To realize a low-power design, a clock chip wakes the system up and switches on the power supply at set times, so that the wireless transmission module runs only during these intervals, minimizing battery consumption and allowing the system to operate stably over the long term.
Le Meur, Yann; Bernard, Thierry; Dorel, Sylvain; Abbiss, Chris R; Honnorat, Gérard; Brisswalter, Jeanick; Hausswirth, Christophe
2011-06-01
The purpose of the present study was to examine relationships between athletes' pacing strategies and running performance during an international triathlon competition. Running split times for each of the 107 finishers of the 2009 European Triathlon Championships (42 females and 65 males) were determined with the use of a digital synchronized video analysis system. Five cameras were placed at various positions on the running circuit (4 laps of 2.42 km). Running speed (RSrace) and an index of running speed variability (IRSVrace) were subsequently calculated over each section or running split. Mean running speed over the first 1272 m of lap 1 was 0.76 km·h⁻¹ (+4.4%) and 1.00 km·h⁻¹ (+5.6%) faster than the mean running speed over the same section during the last three laps, for females and males, respectively (P < .001). A significant inverse correlation was observed between RSrace and IRSVrace for all triathletes (females r = -0.41, P = .009; males r = -0.65, P = .002; whole population r = -0.76, P = .001). Females demonstrated a higher IRSVrace than males (6.1 ± 0.5 km·h⁻¹ and 4.0 ± 1.4 km·h⁻¹, for females and males, respectively, P = .001) due to a greater decrease in running speed over uphill sections. Pacing during the run appears to play a key role in high-level triathlon performance. Elite triathletes should reduce their initial running speed during international competitions, even if high levels of motivation and direct opponents lead them to adopt an aggressive strategy.
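The split-based quantities can be illustrated with a short sketch. The paper's exact IRSVrace definition is not reproduced here, so the standard deviation of section speeds is used as an assumed stand-in with the same units (km/h).

```python
import statistics

def speed_stats(section_speeds_kmh):
    """Mean running speed and a simple running-speed-variability index
    over the timed sections of a lap.

    Uses the population standard deviation of section speeds as an
    illustrative variability index; the study's IRSVrace may differ.
    """
    mean_speed = statistics.fmean(section_speeds_kmh)
    variability = statistics.pstdev(section_speeds_kmh)
    return mean_speed, variability
```

Under this stand-in, a runner who holds an even pace across flat and uphill sections scores a low variability index, matching the paper's finding that lower speed variability accompanies faster overall run times.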
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest.
How does the information obtained from the local methods typical of (2) and the global averaged methods typical of (3) compare for typical systems? The discussion will use examples of response of the Greenland glacier to global warming and surface and groundwater modeling.
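The demand relation stated above, and the surrogate-modeling idea of spending ~50 runs of an expensive model to build a fast stand-in, can be sketched as follows. The "expensive" model function here is invented purely for illustration:

```python
# Sketch of: computational demand = run time x number of runs / parallelism,
# plus a toy surrogate: fit a cheap polynomial to ~50 runs of a made-up
# "expensive" model, then evaluate the surrogate many times.

import numpy as np

def demand_hours(runtime_h, n_runs, parallel_ways):
    """Computational demand as defined in the abstract."""
    return runtime_h * n_runs / parallel_ways

# 100 runs of a 10-hour model on 20 parallel workers: 50 wall-clock hours.
assert demand_hours(10.0, 100, 20) == 50.0

rng = np.random.default_rng(0)

def expensive_model(p):                 # stand-in for a long-running simulator
    return np.sin(3.0 * p) + 0.5 * p**2

train_p = rng.uniform(-1, 1, 50)        # ~50 original-model runs, as in the talk
train_y = expensive_model(train_p)
coeffs = np.polyfit(train_p, train_y, deg=5)   # cheap polynomial surrogate

test_p = rng.uniform(-1, 1, 10000)      # many fast surrogate evaluations
err = np.max(np.abs(np.polyval(coeffs, test_p) - expensive_model(test_p)))
print(f"max surrogate error on 10k points: {err:.4f}")
```

The fit works here because the toy response is a smooth function of the parameter, which is exactly the smoothness requirement the abstract notes for both frugal methods and surrogates.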
Validity of PALMS GPS scoring of active and passive travel compared with SenseCam.
Carlson, Jordan A; Jankowska, Marta M; Meseck, Kristin; Godbole, Suneeta; Natarajan, Loki; Raab, Fredric; Demchak, Barry; Patrick, Kevin; Kerr, Jacqueline
2015-03-01
The objective of this study is to assess validity of the personal activity location measurement system (PALMS) for deriving time spent walking/running, bicycling, and in vehicle, using SenseCam (Microsoft, Redmond, WA) as the comparison. Forty adult cyclists wore a Qstarz BT-Q1000XT GPS data logger (Qstarz International Co., Taipei, Taiwan) and SenseCam (camera worn around the neck capturing multiple images every minute) for a mean time of 4 d. PALMS used distance and speed between global positioning system (GPS) points to classify whether each minute was part of a trip (yes/no), and if so, the trip mode (walking/running, bicycling, or in vehicle). SenseCam images were annotated to create the same classifications (i.e., trip yes/no and mode). Contingency tables (2 × 2) and confusion matrices were calculated at the minute level for PALMS versus SenseCam classifications. Mixed-effects linear regression models estimated agreement (mean differences and intraclass correlation coefficients) between PALMS and SenseCam with regard to minutes/day in each mode. Minute-level sensitivity, specificity, and negative predictive value were ≥88%, and positive predictive value was ≥75% for non-mode-specific trip detection. Seventy-two percent to 80% of outdoor walking/running minutes, 73% of bicycling minutes, and 74%-76% of in-vehicle minutes were correctly classified by PALMS. For minutes per day, PALMS had a mean bias (i.e., amount of over or under estimation) of 2.4-3.1 min (11%-15%) for walking/running, 2.3-2.9 min (7%-9%) for bicycling, and 4.3-5 min (15%-17%) for vehicle time. Intraclass correlation coefficients were ≥0.80 for all modes. PALMS has validity for processing GPS data to objectively measure time spent walking/running, bicycling, and in vehicle in population studies. 
Assessing travel patterns is one of many valuable applications of GPS in physical activity research that can improve our understanding of the determinants and health outcomes of active transportation as well as its effect on physical activity.
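The minute-level classification PALMS performs (trip yes/no, then mode from distance and speed) can be illustrated with a simple rule-based sketch. The speed cut-points below are assumptions chosen for illustration, not the thresholds PALMS actually uses:

```python
# Minimal sketch of PALMS-style minute-level travel-mode classification
# from GPS-derived speed. Cut-points are illustrative assumptions.

def classify_minute(speed_kmh, in_trip):
    """Assign a travel mode to one minute of GPS data."""
    if not in_trip:
        return "stationary"
    if speed_kmh < 10:
        return "walking/running"
    if speed_kmh < 25:
        return "bicycling"
    return "in vehicle"

# (speed, trip yes/no) for four hypothetical minutes
minutes = [(4.2, True), (18.0, True), (55.0, True), (0.3, False)]
modes = [classify_minute(s, t) for s, t in minutes]
print(modes)
```

Comparing such per-minute labels against annotated SenseCam images is what yields the 2 × 2 contingency tables and confusion matrices described in the study.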
GPU real-time processing in NA62 trigger system
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-01-01
A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system tested parasitically with the TDAQ (Trigger and Data Acquisition system) of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run will be presented.
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission and safety critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
Risk factors for injuries in the U.S. Army Ordnance School.
Grier, Tyson L; Morrison, Stephanie; Knapik, Joseph J; Canham-Chervak, Michelle; Jones, Bruce H
2011-11-01
To investigate risk factors for time-loss injuries among soldiers attending U.S. Army Ordnance School Advanced Individual Training. Injury data were obtained from an injury surveillance system. A health questionnaire provided data on age, race, rank, current self-reported injury and illness, and tobacco use. Fitness data were obtained from the operations office. Cumulative time-loss injury incidence was 31% for men and 54% for women. For men, higher risk of injury was associated with race, a current self-reported injury, smoking before entering the Army, lower sit-up performance, and slower 2-mile run times. For women, higher risk of injury was associated with race, a current self-reported injury, and slower 2-mile run times. Smoking cessation and fitness training before entry are potential strategies to reduce injuries among soldiers in the Ordnance School.
Cross-Layer Modeling Framework for Energy-Efficient Resilience
2014-04-01
functional block diagram of the software architecture of PEARL, which stands for: Power Efficient and Resilient Embedded Processing with Real-Time ... DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting ... real-time performance targets. The integrated performance, power and resilience models are nothing but the analytical modeling toolkit described in
Quantum algorithms for Gibbs sampling and hitting-time estimation
Chowdhury, Anirban Narayan; Somma, Rolando D.
2017-02-01
In this paper, we present quantum algorithms for solving two problems regarding stochastic processes. The first algorithm prepares the thermal Gibbs state of a quantum system and runs in time almost linear in √Nβ/Z and polynomial in log(1/ϵ), where N is the Hilbert space dimension, β is the inverse temperature, Z is the partition function, and ϵ is the desired precision of the output state. Our quantum algorithm exponentially improves the dependence on 1/ϵ and quadratically improves the dependence on β of known quantum algorithms for this problem. The second algorithm estimates the hitting time of a Markov chain. For a sparse stochastic matrix P, it runs in time almost linear in 1/(ϵΔ^(3/2)), where ϵ is the absolute precision in the estimation and Δ is a parameter determined by P, and whose inverse is an upper bound of the hitting time. Our quantum algorithm quadratically improves the dependence on 1/ϵ and 1/Δ of the analog classical algorithm for hitting-time estimation. Finally, both algorithms use tools recently developed in the context of Hamiltonian simulation, spectral gap amplification, and solving linear systems of equations.
Skinner, Andrew L; Stone, Christopher J; Doughty, Hazel; Munafò, Marcus R
2018-01-24
Recent developments in smoking cessation support systems and interventions have highlighted the requirement for unobtrusive, passive ways to measure smoking behaviour. A number of systems have been developed for this that either use bespoke sensing technology, or expensive combinations of wearables and smartphones. Here we present StopWatch, a system for passive detection of cigarette smoking that runs on a low-cost smartwatch and does not require additional sensing or a connected smartphone. Our system uses motion data from the accelerometer and gyroscope in an Android smartwatch to detect the signature hand movements of cigarette smoking. It uses machine learning techniques to transform raw motion data into motion features, and in turn into individual drags and instances of smoking. These processes run on the smartwatch, and do not require a smartphone. We conducted preliminary validations of the system in daily smokers (n=13) in laboratory and free-living conditions running on an Android LG G-Watch. In free-living conditions, over a 24-hour period, the system achieved precision of 86% and recall of 71%. StopWatch is a system for passive measurement of cigarette smoking that runs entirely on a commercially available Android smartwatch. It requires no smartphone so the cost is low, and needs no bespoke sensing equipment so participant burden is also low. Performance is currently lower than other more expensive and complex systems, though adequate for some applications. Future developments will focus on enhancing performance, validation on a range of smartwatches, and detection of electronic cigarette use. We present a low-cost, smartwatch-based system for passive detection of cigarette smoking. It uses data from the motion sensors in the watch to identify the signature hand movements of cigarette smoking. 
The system will provide the detailed measures of individual smoking behaviour needed for context-triggered just-in-time smoking cessation support systems, and to enable just-in-time adaptive interventions. More broadly, the system will enable researchers to obtain detailed measures of individual smoking behaviour in free-living conditions that are free from the recall errors and reporting biases associated with self-report of smoking. © The Author(s) 2018. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco.
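The StopWatch pipeline described above (raw motion data, windowed features, then learned drag detection) can be sketched in simplified form. The window size, feature set, threshold detector, and synthetic samples below are all illustrative assumptions; the actual system uses trained machine-learning models on the watch:

```python
# Simplified sketch of a StopWatch-style pipeline: windowed features
# from raw accelerometer samples, then a placeholder drag detector.
# Window size, features, thresholds, and data are all made up.

import math

def window_features(samples, window=50):
    """Split (x, y, z) samples into windows; return mean/stdev of magnitude."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples[i:i+window]]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        feats.append((mean, math.sqrt(var)))
    return feats

def looks_like_drag(feature):
    """Placeholder for the learned drag classifier (thresholds invented)."""
    mean, std = feature
    return 0.9 < mean < 1.3 and std > 0.05

# 100 synthetic samples: 50 near-rest, then 50 with a hand-movement burst.
rest = [(0.0, 0.0, 1.0)] * 50
burst = [(0.8, 0.2, 1.0) if i % 2 else (0.0, 0.0, 1.0) for i in range(50)]
flags = [looks_like_drag(f) for f in window_features(rest + burst)]
print(flags)
```

Aggregating such per-window detections into individual drags, and drags into smoking instances, is the remaining step the real system performs on-watch.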
Noack, Marko; Partzsch, Johannes; Mayr, Christian G; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene
2015-01-01
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100 nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system occupies an area of 0.36 mm² and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling.
NASA Technical Reports Server (NTRS)
Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Leighty, Bradley D.
1992-01-01
Simple light-meter circuit used to position knife edge of schlieren optical system to block exactly half the light. Enables operator to quickly check position of knife edge between tunnel runs to ascertain whether or not it is in alignment. Permanent measuring system made part of each schlieren system. If placed in unused area of image plane, or in monitoring beam from mirror knife edge, provides real-time assessment of alignment of schlieren system.
Dedicated heterogeneous node scheduling including backfill scheduling
Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA
2006-07-25
A method and system for job backfill scheduling dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job. Once determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources otherwise remaining idle.
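The backfill rule described, a lower-priority job may start early only if it cannot delay any already-scheduled higher-priority job, can be sketched on a single homogeneous pool. This simplifies away the sub-pools of the patent, and all the job sizes and times are invented:

```python
# Toy backfill sketch: find the earliest start time at which a job's
# node request fits for its whole runtime, given existing reservations.
# Single homogeneous pool; sub-pools and priorities are simplified away.

TOTAL_NODES = 8

def free_nodes(reservations, t):
    """Nodes free at time t given (start, end, nodes) reservations."""
    used = sum(n for s, e, n in reservations if s <= t < e)
    return TOTAL_NODES - used

def earliest_start(reservations, nodes, runtime):
    """Scan reservation edges for the first slot where the job fits for
    its whole runtime without displacing any existing reservation."""
    candidates = sorted({0} | {e for _, e, _ in reservations})
    for t in candidates:
        # check t plus every reservation start inside [t, t + runtime)
        probe = sorted({t} | {s for s, _, _ in reservations if t < s < t + runtime})
        if all(free_nodes(reservations, p) >= nodes for p in probe):
            return t
    return None

# Higher-priority job holds 6 of 8 nodes for t in [0, 10).
reservations = [(0, 10, 6)]
# A 2-node, 5-unit lower-priority job backfills immediately at t=0 ...
t_small = earliest_start(reservations, nodes=2, runtime=5)
reservations.append((t_small, t_small + 5, 2))
# ... while a 4-node job must wait for the big job to finish.
t_big = earliest_start(reservations, nodes=4, runtime=5)
print(t_small, t_big)
```

The small job fills nodes that would otherwise sit idle, which is precisely the utilization gain the patent claims.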
StatsDB: platform-agnostic storage and understanding of next generation sequencing run metrics
Ramirez-Gonzalez, Ricardo H.; Leggett, Richard M.; Waite, Darren; Thanki, Anil; Drou, Nizar; Caccamo, Mario; Davey, Robert
2014-01-01
Modern sequencing platforms generate enormous quantities of data in ever-decreasing amounts of time. Additionally, techniques such as multiplex sequencing allow one run to contain hundreds of different samples. With such data comes a significant challenge to understand its quality and to understand how the quality and yield are changing across instruments and over time. As well as the desire to understand historical data, sequencing centres often have a duty to provide clear summaries of individual run performance to collaborators or customers. We present StatsDB, an open-source software package for storage and analysis of next generation sequencing run metrics. The system has been designed for incorporation into a primary analysis pipeline, either at the programmatic level or via integration into existing user interfaces. Statistics are stored in an SQL database and APIs provide the ability to store and access the data while abstracting the underlying database design. This abstraction allows simpler, wider querying across multiple fields than is possible by the manual steps and calculation required to dissect individual reports, e.g. "provide metrics about nucleotide bias in libraries using adaptor barcode X, across all runs on sequencer A, within the last month". The software is supplied with modules for storage of statistics from FastQC, a commonly used tool for analysis of sequence reads, but the open nature of the database schema means it can be easily adapted to other tools. Currently at The Genome Analysis Centre (TGAC), reports are accessed through our LIMS system or through a standalone GUI tool, but the API and supplied examples make it easy to develop custom reports and to interface with other packages. PMID:24627795
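The StatsDB pattern, run metrics in an SQL store behind a thin API that hides the schema, can be sketched with SQLite. The table layout, metric names, and helper functions below are invented for illustration and are not the actual StatsDB schema or API:

```python
# Minimal sketch of the StatsDB idea: store per-run metrics in SQL and
# query across instruments/barcodes through a schema-hiding API.
# Table layout and metric names are illustrative, not StatsDB's own.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE run_metric (
                  instrument TEXT, run_id TEXT, barcode TEXT,
                  metric TEXT, value REAL)""")

def store(instrument, run_id, barcode, metric, value):
    db.execute("INSERT INTO run_metric VALUES (?, ?, ?, ?, ?)",
               (instrument, run_id, barcode, metric, value))

def mean_metric(metric, instrument=None, barcode=None):
    """Query across runs without hand-parsing individual per-run reports."""
    sql, args = "SELECT AVG(value) FROM run_metric WHERE metric = ?", [metric]
    if instrument:
        sql += " AND instrument = ?"; args.append(instrument)
    if barcode:
        sql += " AND barcode = ?"; args.append(barcode)
    return db.execute(sql, args).fetchone()[0]

store("seqA", "run1", "X", "gc_content", 41.0)
store("seqA", "run2", "X", "gc_content", 43.0)
store("seqB", "run3", "Y", "gc_content", 55.0)
print(mean_metric("gc_content", instrument="seqA", barcode="X"))
```

Queries like "barcode X across all runs on sequencer A" then become one function call instead of dissecting many individual reports.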
2. DETAIL OF STRUCTURAL SYSTEM FOR CANTILEVERED HOG RUN; BUILDING 168 (1960 HOG KILL) IS BENEATH HOG RUN - Rath Packing Company, Cantilevered Hog Run, Sycamore Street between Elm & Eighteenth Streets, Waterloo, Black Hawk County, IA
The effect of carbohydrate-electrolyte beverage drinking strategy on 10-mile running performance.
Rollo, Ian; James, Lewis; Croft, Louise; Williams, Clyde
2012-10-01
The purpose of the current study was to investigate the influence of ingesting a carbohydrate-electrolyte (CHO-E) beverage ad libitum or as a prescribed volume on 10-mile run performance and gastrointestinal (GI) discomfort. Nine male recreational runners completed the 10-mile run under the following 3 conditions: no drinking (ND; 0 ml, 0 g CHO), ad libitum drinking (AD; 315 ± 123 ml, 19 ± 7 g CHO), and prescribed drinking (PD; 1,055 ± 90 ml, 64 ± 5 g CHO). During the AD and PD trials, drinks were provided on completion of Miles 2, 4, 6, and 8. Running performance, speed (km/hr), and 10-mile run time were assessed using a global positioning satellite system. The runners' ratings of perceived exertion and GI comfort were recorded on completion of each lap of the 10-mile run. There was a significant difference (p < .10) in performance times for the 10-mile race for the ND, AD, and PD trials, which were 72:05 ± 3:36, 71:14 ± 3:35, and 72:12 ± 3:53 min:s, respectively (p = .094). Ratings of GI comfort were reduced during the PD trial in comparison with both AD and ND trials. In conclusion, runners unaccustomed to habitually drinking CHO-E beverages during training improved their 10-mile race performance when drinking a CHO-E beverage ad libitum, in comparison with drinking a prescribed volume of the same beverage or not drinking.
Ada 9X Project Revision Request Report. Supplement 1
1990-01-01
Non-portable use of operating system primitives or of Ada run time system internals. POSSIBLE SOLUTIONS: Mandate that compilers recognize tasks that... complex than a simple operating system file, the compiler vendor must provide routines to manipulate it (create, copy, move etc.) as a single entity... system, to support fault tolerance, load sharing, change of system operating mode etc. It is highly desirable that such important software be written in
Shoe cleat position during cycling and its effect on subsequent running performance in triathletes.
Viker, Tomas; Richardson, Matt X
2013-01-01
Research with cyclists suggests a decreased load on the lower limbs by placing the shoe cleat more posteriorly, which may benefit subsequent running in a triathlon. This study investigated the effect of shoe cleat position during cycling on subsequent running. Following bike-run training sessions with both aft and traditional cleat positions, 13 well-trained triathletes completed a 30 min simulated draft-legal triathlon cycling leg, followed by a maximal 5 km run on two occasions, once with aft-placed and once with traditionally placed cleats. Oxygen consumption, breath frequency, heart rate, cadence and power output were measured during cycling, while heart rate, contact time, 200 m lap time and total time were measured during running. Cardiovascular measures did not differ between aft and traditional cleat placement during the cycling protocol. The 5 km run time was similar for aft and traditional cleat placement, at 1084 ± 80 s and 1072 ± 64 s, respectively, as was contact time during km 1 and 5, and heart rate and running speed for km 5 for the two cleat positions. Running speed during km 1 was 2.1 ± 1.8% faster (P < 0.05) for the traditional cleat placement. There are no beneficial effects of an aft cleat position on subsequent running in a short distance triathlon.
The Chimera II Real-Time Operating System for advanced sensor-based control applications
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1992-01-01
Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. The Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems that results in a reduction of development time and cost.
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving where several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere was made. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members in a systematic way. It was found that, for some variables and some time periods, the ensemble runs deviated considerably from the virtual reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was for example the case for river discharge in the summer. We also analyzed the spread of model states as function of time and found clear relations between the spread and the time of the year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under conditions with larger air pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights in the dynamics of land-atmosphere feedbacks which we will further highlight in the presentation.
NASA Technical Reports Server (NTRS)
Case, Jonathan L.; Splitt, Michael E.; Fuell, Kevin K.; Santos, Pablo; Lazarus, Steven M.; Jedlovec, Gary J.
2009-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center, the Florida Institute of Technology, and the NOAA/NWS Weather Forecast Office at Miami, FL (MFL) are collaborating on a project to investigate the impact of using high-resolution, 2-km Moderate Resolution Imaging Spectroradiometer (MODIS) sea surface temperature (SST) composites within the Weather Research and Forecasting (WRF) prediction system. The NWS MFL is currently running WRF in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven hour forecasts are run daily initialized at 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution. The project objective is to determine whether more accurate specification of the lower-boundary forcing over water using the MODIS SST composites within the 4-km WRF runs will result in improved sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. SPoRT conducted parallel WRF EMS runs from February to August 2007 identical to the operational runs at NWS MFL except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. During the course of this evaluation, an intriguing case was examined from 6 May 2007, in which lake breezes and convection around Lake Okeechobee evolved quite differently when using the high-resolution SPoRT MODIS SST composites versus the lower-resolution RTG SSTs.
This paper will analyze the differences in the 6 May simulations, as well as examine other cases from the summer 2007 in which the WRF-simulated Lake Okeechobee breezes evolved differently due to the SST initialization. The effects on wind fields and precipitation systems will be emphasized, including validation against surface mesonet observations and Stage IV precipitation grids.
Program Aids Visualization Of Data
NASA Technical Reports Server (NTRS)
Truong, L. V.
1995-01-01
Living Color Frame System (LCFS) computer program developed to solve some problems that arise in connection with generation of real-time graphical displays of numerical data and of statuses of systems. Need for program like LCFS arises because computer graphics often applied for better understanding and interpretation of data under observation and these graphics become more complicated when animation required during run time. Eliminates need for custom graphical-display software for application programs. Written in Turbo C++.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Allan Ray
1987-05-01
Increases in high speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard, thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
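The critical-path family of heuristics mentioned above can be sketched as greedy list scheduling on a task DAG: among ready tasks, run the one heading the longest remaining time-weighted path first. The graph and task times below are made up, and the per-task processor-count requests of the thesis are omitted:

```python
# Sketch of critical-path-priority list scheduling on a task DAG.
# graph maps each task to its successors; times gives task durations.
# Real tasks in the thesis also request processors; omitted here.

def critical_path_length(graph, times, task, memo=None):
    """Longest time-weighted path from task to any sink (its priority)."""
    memo = {} if memo is None else memo
    if task not in memo:
        succs = graph.get(task, [])
        memo[task] = times[task] + max(
            (critical_path_length(graph, times, s, memo) for s in succs),
            default=0)
    return memo[task]

def list_schedule(graph, times, n_procs):
    """Greedy list scheduling: ready task with longest critical path first."""
    preds = {t: 0 for t in times}
    for t, succs in graph.items():
        for s in succs:
            preds[s] += 1
    ready = [t for t, c in preds.items() if c == 0]
    proc_free = [0] * n_procs          # when each processor becomes free
    finish = {}
    while ready:
        ready.sort(key=lambda t: -critical_path_length(graph, times, t))
        task = ready.pop(0)
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[q] for q, ss in graph.items()
                                      if task in ss])
        finish[task] = start + times[task]
        proc_free[p] = finish[task]
        for s in graph.get(task, []):
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)
    return max(finish.values())        # makespan

graph = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}
times = {"a": 2, "b": 1, "c": 3, "d": 1}
print(list_schedule(graph, times, n_procs=2))
```

On this diamond-shaped graph the two entry tasks run in parallel and the chain a → c → d sets the makespan, which is the behavior critical-path priorities are designed to protect.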
NASA Astrophysics Data System (ADS)
Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.
The motility of E. coli bacteria is described as a run and tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is described as an exponential decay of characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential, but a heavy-tailed power law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics in the macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model considering bacterial dwelling times on the surfaces related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when considering the power law run time distribution. However, the model fails to reproduce the qualitative dynamics when the classical exponential run and tumble distribution is considered. Moreover, we have corroborated the existence of a power law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
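The difference between exponential and heavy-tailed run statistics can be illustrated with a toy 1-D sampling experiment. The tail exponent, run counts, and unit speed below are illustrative assumptions, not parameters from the study; a Pareto tail with exponent below 2 makes rare, very long runs dominate transport:

```python
# Toy comparison of total path covered under exponential vs power-law
# run-time statistics (speed = 1, so run time = run length).
# Exponent and counts are illustrative, not fitted to the experiments.

import random

random.seed(1)

def total_path(sample_run, n_runs=10000):
    """Sum of run lengths over n_runs tumble events."""
    return sum(sample_run() for _ in range(n_runs))

mean_run = 1.0                                    # ~1 s characteristic run
exp_path = total_path(lambda: random.expovariate(1.0 / mean_run))

alpha = 1.5                                       # assumed heavy-tail exponent
pareto_path = total_path(lambda: random.paretovariate(alpha))

print(f"exponential: {exp_path:.0f}, power law: {pareto_path:.0f}")
```

With the same number of tumbles, the heavy-tailed walker travels several times farther, which hints at why the contamination tongues are reproduced only with the power-law distribution.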
46 CFR 113.30-25 - Detailed requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... stations must be able to communicate at the same time. (b) The loss of one component of the system must not disable the rest of the system. (c) The system must be able to operate under full load for the same period... must run as close to the fore-and-aft centerline of the vessel as practicable. (l) No cable for voice...
NASA Technical Reports Server (NTRS)
2008-01-01
The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.
XML Flight/Ground Data Dictionary Management
NASA Technical Reports Server (NTRS)
Wright, Jesse; Wiklow, Colette
2007-01-01
A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.
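As a rough illustration of the kind of XML generation involved, the sketch below builds a single telemetry-dictionary entry with Python's standard xml.etree module. The element and attribute names (`telemetry_channel`, `apid`, and so on) are invented for illustration and are not the actual flight/ground schema.

```python
# Hypothetical sketch: generating one XML telemetry-dictionary entry.
# Element/attribute names are assumptions, not the real schema.
import xml.etree.ElementTree as ET

def make_telemetry_entry(name, apid, data_type, units):
    """Build one XML element describing a telemetry channel."""
    entry = ET.Element("telemetry_channel", attrib={"name": name})
    ET.SubElement(entry, "apid").text = str(apid)
    ET.SubElement(entry, "type").text = data_type
    ET.SubElement(entry, "units").text = units
    return entry

root = ET.Element("telemetry_dictionary")
root.append(make_telemetry_entry("BATT_VOLT", 101, "float32", "V"))
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Because the dictionary is machine-readable, the same file can be transformed into ground-system formats or documentation, which is the synchronization benefit the abstract describes.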
Boswell, Paul G.; Abate-Pella, Daniel; Hewitt, Joshua T.
2015-01-01
Compound identification by liquid chromatography-mass spectrometry (LC-MS) is a tedious process, mainly because authentic standards must be run on a user’s system to be able to confidently reject a potential identity from its retention time and mass spectral properties. Instead, it would be preferable to use shared retention time/index data to narrow down the identity, but shared data cannot be used to reject candidates with an absolute level of confidence because the data are strongly affected by differences between HPLC systems and experimental conditions. However, a technique called “retention projection” was recently shown to account for many of the differences. In this manuscript, we discuss an approach to calculate appropriate retention time tolerance windows for projected retention times, potentially making it possible to exclude candidates with an absolute level of confidence, without needing to have authentic standards of each candidate on hand. In a range of multi-segment gradients and flow rates run among seven different labs, the new approach calculated tolerance windows that were significantly more appropriate for each retention projection than global tolerance windows calculated for retention projections or linear retention indices. Though there were still some small differences between the labs that evidently were not taken into account, the calculated tolerance windows only needed to be relaxed by 50% to make them appropriate for all labs. Even then, 42% of the tolerance windows calculated in this study without standards were narrower than those required by WADA for positive identification, where standards must be run contemporaneously. PMID:26292624
Boswell, Paul G; Abate-Pella, Daniel; Hewitt, Joshua T
2015-09-18
Compound identification by liquid chromatography-mass spectrometry (LC-MS) is a tedious process, mainly because authentic standards must be run on a user's system to be able to confidently reject a potential identity from its retention time and mass spectral properties. Instead, it would be preferable to use shared retention time/index data to narrow down the identity, but shared data cannot be used to reject candidates with an absolute level of confidence because the data are strongly affected by differences between HPLC systems and experimental conditions. However, a technique called "retention projection" was recently shown to account for many of the differences. In this manuscript, we discuss an approach to calculate appropriate retention time tolerance windows for projected retention times, potentially making it possible to exclude candidates with an absolute level of confidence, without needing to have authentic standards of each candidate on hand. In a range of multi-segment gradients and flow rates run among seven different labs, the new approach calculated tolerance windows that were significantly more appropriate for each retention projection than global tolerance windows calculated for retention projections or linear retention indices. Though there were still some small differences between the labs that evidently were not taken into account, the calculated tolerance windows only needed to be relaxed by 50% to make them appropriate for all labs. Even then, 42% of the tolerance windows calculated in this study without standards were narrower than those required by WADA for positive identification, where standards must be run contemporaneously. Copyright © 2015 Elsevier B.V. All rights reserved.
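The rejection logic described above can be sketched as follows, assuming per-projection tolerance windows are already in hand. The compound names, projected retention times and window widths are illustrative, not values from the study.

```python
# Minimal sketch of candidate rejection: a candidate compound is excluded
# when its observed retention time falls outside the tolerance window
# around its projected retention time. All numbers are illustrative.
def within_window(observed_rt, projected_rt, tolerance):
    """True if the observed retention time is consistent with the projection."""
    return abs(observed_rt - projected_rt) <= tolerance

# Per-projection tolerances (as advocated above) can differ per compound,
# unlike a single global window.
candidates = {"caffeine": (12.4, 0.3), "theobromine": (9.8, 0.5)}  # (projected, tol)
observed = 12.6
surviving = [name for name, (proj, tol) in candidates.items()
             if within_window(observed, proj, tol)]
print(surviving)
```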
Further Automate Planned Cluster Maintenance to Minimize System Downtime during Maintenance Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R.
This report documents the integration and testing of the automated update process of compute clusters in LC to minimize impact to user productivity. Description: A set of scripts will be written and deployed to further standardize cluster maintenance activities and minimize downtime during planned maintenance windows. Completion Criteria: When the scripts have been deployed and used during planned maintenance windows and a timing comparison is completed between the existing process and the new, more automated process, this milestone is complete. This milestone was completed on Aug 23, 2016 on the new CTS1 cluster called Jade, when a request to upgrade the version of TOSS 3 was initiated while SWL jobs and normal user jobs were running. Jobs that were running when the update to the system began continued to run to completion. New jobs on the cluster started on the new release of TOSS 3. No system administrator action was required. Current update procedures in TOSS 2 begin by killing all user jobs. Then all diskfull nodes are updated, which can take a few hours. Only after the updates are applied are all nodes rebooted and finally put back into service. A system administrator is required for all steps. In terms of human time spent during a cluster OS update, the TOSS 3 automated procedure on Jade took 0 FTE hours. Doing the same update without the Toss Update Tool would have required 4 FTE hours.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the poor scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20× reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
Multi-GPGPU Tsunami simulation at Toyama-bay
NASA Astrophysics Data System (ADS)
Furuyama, Shoichi; Ueda, Yuki
2017-07-01
Accelerated multi-General Purpose Graphics Processing Unit (GPGPU) computation of Tsunami run-up was achieved over a wide area (the whole of Toyama-bay in Japan) using a faster computation technique. Toyama-bay has active faults at the sea bed, so there is a high probability of earthquakes and, in the case of a huge earthquake, of Tsunami waves; predicting the area of Tsunami run-up is therefore important for reducing the damage the disaster causes to residents. However, the simulation is a very hard task because of limited computer resources. High-resolution computational cells on the order of several meters are required for a Tsunami run-up simulation, because artificial structures on the ground such as roads, buildings and houses are very small, while at the same time a huge simulation area is required. In the Toyama-bay case the area is 42 km × 15 km: when 5 m × 5 m computational cells are used, over 26,000,000 cells are generated. A normal CPU desktop computer took about 10 hours for this calculation. Reducing this calculation time is an important problem for an immediate Tsunami run-up prediction system, which would in turn help protect residents around the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20X cards, with InfiniBand network connections between the computer nodes via the MVAPICH library. As a result, the calculation ran 5.16 times faster on six GPUs than on one GPU, an 86% parallel efficiency relative to linear speed-up.
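The 86% figure quoted above follows directly from the definition of parallel efficiency (measured speedup divided by device count); a one-line check:

```python
# Parallel efficiency: fraction of ideal linear scaling actually achieved.
def parallel_efficiency(speedup, n_devices):
    return speedup / n_devices

# 5.16x speedup on six GPUs, as reported above.
eff = parallel_efficiency(5.16, 6)
print(f"{eff:.0%}")
```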
Method and Apparatus for Monitoring of Daily Activity in Terms of Ground Reaction Forces
NASA Technical Reports Server (NTRS)
Whalen, Robert T. (Inventor); Breit, Gregory A. (Inventor)
2001-01-01
A device to record and analyze habitual daily activity in terms of the history of gait-related musculoskeletal loading is disclosed. The device consists of a pressure-sensing insole placed into the shoe or embedded in a shoe sole, which detects contact of the foot with the ground. The sensor is coupled to a portable battery-powered digital data logger clipped to the shoe or worn around the ankle or waist. During the course of normal daily activity, the system maintains a record of time-of-occurrence of all non-spurious foot-down and lift-off events. Off line, these data are filtered and converted to a history of foot-ground contact times, from which measures of cumulative musculoskeletal loading, average walking- and running-specific gait speed, total time spent walking and running, total number of walking steps and running steps, and total gait-related energy expenditure are estimated from empirical regressions of various gait parameters to the contact time reciprocal. Data are available as cumulative values or as daily averages by menu selection. The data provided by this device are useful for assessment of musculoskeletal and cardiovascular health and risk factors associated with habitual patterns of daily activity.
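A hypothetical sketch of the off-line processing described above: gait speed is estimated from the reciprocal of foot-ground contact time via an empirical regression, and steps are classified as walking or running by contact time. The regression coefficients and the walk/run threshold below are made up for illustration; the real values come from the empirical regressions the patent describes.

```python
# Hypothetical post-processing of foot-ground contact times. The regression
# coefficients (a, b) and the 0.45 s walk/run threshold are illustrative
# assumptions, not the device's calibrated values.
def gait_speed_from_contact(contact_time_s, a=-1.2, b=1.1):
    """Estimate gait speed (m/s) from the reciprocal of contact time (s)."""
    return a + b / contact_time_s

def daily_summary(contact_times_s, walk_run_threshold_s=0.45):
    """Count walking vs. running steps (shorter contact = faster gait)."""
    walking = [t for t in contact_times_s if t >= walk_run_threshold_s]
    running = [t for t in contact_times_s if t < walk_run_threshold_s]
    return {"walking_steps": len(walking), "running_steps": len(running)}

summary = daily_summary([0.70, 0.65, 0.30, 0.28, 0.62])
print(summary)
```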
Incorporating Flexibility in the Design of Repairable Systems - Design of Microgrids
2014-01-01
Vijitashwa Pandey; Annette Skowronska ...optimization of complex systems such as a microgrid is, however, computationally intensive. The problem is exacerbated if we must incorporate... flexibility in terms of allowing the microgrid architecture and its running protocol to change with time. To reduce the computational effort, this paper
Applications products of aviation forecast models
NASA Technical Reports Server (NTRS)
Garthner, John P.
1988-01-01
A service called the Optimum Path Aircraft Routing System (OPARS) supplies products based on output data from the Naval Oceanographic Global Atmospheric Prediction System (NOGAPS), a model run on a Cyber-205 computer. Temperatures and winds are extracted from the surface to 100 mb, approximately 55,000 ft. Forecast winds are available in six-hour time steps.
Evaluating real-time Java for mission-critical large-scale embedded systems
NASA Technical Reports Server (NTRS)
Sharp, D. C.; Pla, E.; Luecke, K. R.; Hassan, R. J.
2003-01-01
This paper describes benchmarking results on an RT JVM. This paper extends previously published results by including additional tests, by being run on a recently available pre-release version of the first commercially supported RTSJ implementation, and by assessing results based on our experience with avionics systems in other languages.
ALMA test interferometer control system: past experiences and future developments
NASA Astrophysics Data System (ADS)
Marson, Ralph G.; Pokorny, Martin; Kern, Jeff; Stauffer, Fritz; Perrigouard, Alain; Gustafsson, Birger; Ramey, Ken
2004-09-01
The Atacama Large Millimeter Array (ALMA) will, when it is completed in 2012, be the world's largest millimeter & sub-millimeter radio telescope. It will consist of 64 antennas, each one 12 meters in diameter, connected as an interferometer. The ALMA Test Interferometer Control System (TICS) was developed as a prototype for the ALMA control system. Its initial task was to provide sufficient functionality for the evaluation of the prototype antennas. The main antenna evaluation tasks include surface measurements via holography and pointing accuracy, measured at both optical and millimeter wavelengths. In this paper we will present the design of TICS, which is a distributed computing environment. In the test facility there are four computers: three real-time computers running VxWorks (one on each antenna and a central one) and a master computer running Linux. These computers communicate via Ethernet, and each of the real-time computers is connected to the hardware devices via an extension of the CAN bus. We will also discuss our experience with this system and outline changes we are making in light of our experiences.
Optimal chemotaxis in intermittent migration of animal cells
NASA Astrophysics Data System (ADS)
Romanczuk, P.; Salbreux, G.
2015-04-01
Animal cells can sense chemical gradients without moving and are faced with the challenge of migrating towards a target despite noisy information on the target position. Here we discuss optimal search strategies for a chaser that moves by switching between two phases of motion ("run" and "tumble"), reorienting itself towards the target during tumble phases and performing persistent migration during run phases. We show that the chaser's average run time can be adjusted to minimize the target catching time or the spatial dispersion of the chasers. We obtain analytical results for the catching time and for the spatial dispersion in the limits of small and large ratios of run time to tumble time, and scaling laws for the optimal run times. Our findings have implications for optimal chemotactic strategies in animal cell migration.
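A minimal deterministic sketch of the run-and-tumble chase: during each tumble the chaser reorients exactly toward the target (ignoring the positional noise treated in the paper), then runs straight for a fixed run time. Parameter values are illustrative.

```python
# Deterministic run-and-tumble sketch: tumble = instantaneous reorientation
# toward the target; run = straight motion at constant speed for run_time.
# Noise in the target estimate (central to the paper) is omitted here.
import math

def chase(target, start, speed=1.0, run_time=0.5, n_cycles=40):
    x, y = start
    for _ in range(n_cycles):
        dx, dy = target[0] - x, target[1] - y
        dist = math.hypot(dx, dy)
        if dist < speed * run_time:      # target reached within this run phase
            return target, dist
        # tumble: reorient toward the target, then run straight
        x += speed * run_time * dx / dist
        y += speed * run_time * dy / dist
    return (x, y), math.hypot(target[0] - x, target[1] - y)

pos, final_dist = chase(target=(10.0, 5.0), start=(0.0, 0.0))
print(final_dist)
```

With a longer run time the chaser covers ground faster but overshoots more near the target, which is the trade-off behind the optimal run times derived in the paper.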
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge in the past to accurately locate and quantify the pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in our current work to address this challenge. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves a high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of the vehicle running at different speeds are successfully identified by this method.
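As a back-of-envelope illustration of the Doppler effect the time-domain holography method must undo, the sketch below computes the frequency shift heard by a fixed microphone as a source approaches it head-on; the source frequency and vehicle speed are illustrative values, not from the paper.

```python
# Doppler shift for a source moving straight toward a fixed observer:
# f_received = f_source * c / (c - v). Values below are illustrative.
def doppler_received(f_source, v_source, c=343.0):
    """Received frequency (Hz) for a head-on approaching source."""
    return f_source * c / (c - v_source)

f_rx = doppler_received(f_source=1000.0, v_source=27.8)  # ~100 km/h
print(round(f_rx, 1))
```

Real pass-by geometry uses the component of velocity along the source-microphone line, which changes continuously as the vehicle passes, so the correction must be applied sample by sample in the time domain.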
The SISMA Project: A pre-operative seismic hazard monitoring system.
NASA Astrophysics Data System (ADS)
Chersich, Massimiliano; Amodio, Angelo; Francia, Andrea; Sparpaglione, Claudio
2009-04-01
Galileian Plus is currently leading the development, in collaboration with several Italian Universities, of the SISMA (Seismic Information System for Monitoring and Alert) Pilot Project financed by the Italian Space Agency. The system is devoted to the continuous monitoring of seismic risk and is intended to support the Italian Civil Protection decisional process. Completion of the Pilot Project is planned for the beginning of 2010. The main scientific paradigm of SISMA is an innovative deterministic approach integrating geophysical models, geodesy and active tectonics. This paper gives a general overview of the project along with its progress status, with a particular focus on the architectural design details and the software implementation choices. SISMA is built on top of a software infrastructure developed by Galileian Plus to integrate the scientific programs devoted to the updating of seismic risk maps. The main characteristics of the system may be summarized as follows: automatic download of input data; integration of scientific programs; definition and scheduling of chains of processes; monitoring and control of the system through a graphical user interface (GUI); compatibility of the products with ESRI ArcGIS, by means of post-processing conversion. a) Automatic download of input data. SISMA needs input data such as GNSS observations, an updated seismic catalogue, SAR satellite orbits, etc., that are periodically updated and made available from remote servers through FTP and HTTP. This task is accomplished by a dedicated, user-configurable component. b) Integration of scientific programs. SISMA integrates many scientific programs written in different languages (Fortran, C, C++, Perl and Bash) and running on different operating systems. These design requirements led to the development of a distributed system which is platform independent and is able to run any terminal-based program following a few simple predefined rules.
c) Definition and scheduling of chains of processes. Processes are bound to each other, in the sense that the output of process "A" should be passed as input to process "B". In this case process "B" must run automatically as soon as the required input is ready. In SISMA this issue is handled with the "data-driven" activation concept, which allows specifying that a process should be started as soon as the needed input datum has been made available in the archive. Moreover, SISMA may run processes on a "time-driven" basis: the infrastructure provides a configurable scheduler allowing the user to define the start time and the periodicity of such processes. d) Monitoring and control. The operator of the system needs to monitor and control every process running in the system. The SISMA infrastructure allows the user, through its GUI, to: view log messages of running and old processes; stop running processes; monitor process executions; and monitor resource status (available RAM, network reachability and available disk space) for every machine in the system. e) Compatibility with ESRI Shapefiles. Nearly all SISMA data carries some geographic information, so it is useful to integrate it in a Geographic Information System (GIS). Processor outputs are georeferenced, but they are generated as ASCII files in a proprietary format and thus cannot be directly loaded into a GIS. The infrastructure provides a simple framework for adding filters that read the data in the proprietary format and convert it to the ESRI Shapefile format.
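The "data-driven" activation concept in (c) can be sketched as a toy scheduler: each process declares the datum it needs and the datum it produces, and a process starts as soon as its input appears in the archive. The process and datum names are hypothetical.

```python
# Toy data-driven activation: a process runs as soon as the datum it needs
# is present in the archive; its output may in turn activate other processes.
def run_data_driven(processes, initial_data):
    """processes: list of (name, needed_datum, produced_datum)."""
    archive = set(initial_data)
    executed = []
    pending = list(processes)
    progress = True
    while pending and progress:
        progress = False
        for proc in list(pending):
            name, needed, produced = proc
            if needed in archive:        # input datum is ready: activate
                executed.append(name)
                archive.add(produced)
                pending.remove(proc)
                progress = True
    return executed

# "B" depends on "A"'s output, so it runs second even though listed first.
order = run_data_driven(
    [("B", "a_out", "b_out"), ("A", "gnss_obs", "a_out")],
    initial_data=["gnss_obs"])
print(order)
```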
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical-section handling, task scheduling, interrupt handling, semaphore and message-mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here specifies the position, definition, function and principle of the micro kernel. The kernel runs on an ATMEL AT89C51 microcontroller platform. Simulation results prove that the designed micro kernel is stable and reliable and responds quickly when operating in an application system.
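As an illustration of the importance/urgency-based dispatching described above, the sketch below pops ready tasks in priority order from a heap. The task names and priorities are invented, and a real 8051 kernel would of course be written in C or assembly rather than Python.

```python
# Toy priority-based dispatcher: among ready tasks, the most urgent
# (lowest priority number) runs first. Names/priorities are illustrative.
import heapq

def schedule(tasks):
    """tasks: list of (priority, name); lower number = more urgent."""
    ready = list(tasks)
    heapq.heapify(ready)                 # min-heap keyed on priority
    run_order = []
    while ready:
        _, name = heapq.heappop(ready)   # dispatch the most urgent task
        run_order.append(name)
    return run_order

order = schedule([(3, "logging"), (1, "interrupt_handler"), (2, "mailbox_io")])
print(order)
```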
Costa, Marcelo S; Ardais, Ana Paula; Fioreze, Gabriela T; Mioranzza, Sabrina; Botton, Paulo Henrique S; Portela, Luis Valmor; Souza, Diogo O; Porciúncula, Lisiane O
2012-01-10
Physical exercise protocols have varied widely across studies, raising the question of whether there is an optimal intensity, duration and frequency that would produce maximal benefits in attenuating symptoms related to anxiety disorders. Although physical exercise causes modifications in neurotransmission systems, the involvement of neuromodulators such as adenosine has not been investigated after chronic exercise training. Anxiety-related behavior was assessed in the elevated plus-maze in adult and middle-aged rats submitted to 8 weeks of treadmill running 1, 3 or 7 days/week. The speed of running was adjusted weekly to maintain moderate intensity. The hippocampal adenosine A1 and A2A receptor densities were also assessed. The treadmill running protocol was effective in increasing physical exercise capacity in adult and middle-aged rats. All frequencies of treadmill running equally decreased the time spent in the open arms in adult animals. Middle-aged treadmill control rats spent less time in the open arms than adult treadmill control rats. However, treadmill running one day/week reversed this age effect. Adenosine A1 receptor density was not changed between groups, but treadmill running counteracted the age-related increase in adenosine A2A receptors. Although treadmill running, independent of frequency, triggered anxiety in adult rats and treadmill running one day/week reversed the age-related anxiety, no consistent relationship was found with hippocampal adenosine receptor densities. Thus, our data suggest that, as a complementary therapy in the management of mental disturbances, the frequency and intensity of physical exercise should be taken into account according to age. Besides, this is the first study reporting the modulation of adenosine receptors after chronic physical exercise, which could be important for preventing neurological disorders associated with increases in adenosine A2A receptors. Copyright © 2011. Published by Elsevier Inc.
Measuring circadian and acute light responses in mice using wheel running activity.
LeGates, Tara A; Altimus, Cara M
2011-02-04
Circadian rhythms are physiological functions that cycle over a period of approximately 24 hours (circadian: circa, approximate, and diem, day). They are responsible for timing our sleep/wake cycles and hormone secretion. Since this timing is not precisely 24 hours, it is synchronized to the solar day by light input. This is accomplished via photic input from the retina to the suprachiasmatic nucleus (SCN), which serves as the master pacemaker synchronizing peripheral clocks in other regions of the brain and peripheral tissues to the environmental light-dark cycle. The alignment of rhythms to this environmental light-dark cycle organizes particular physiological events into the correct temporal niche, which is crucial for survival. For example, mice sleep during the day and are active at night. This ability to consolidate activity to either the light or dark portion of the day is referred to as circadian photoentrainment and requires light input to the circadian clock. The activity of mice at night is robust, particularly in the presence of a running wheel. Measuring this behavior is a minimally invasive method that can be used to evaluate the functionality of the circadian system as well as light input to this system. The methods covered here are used to examine the circadian clock, light input to this system, and the direct influence of light on wheel-running behavior.
40 CFR 258.26 - Run-on/run-off control systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... storm; (2) A run-off control system from the active portion of the landfill to collect and control at least the water volume resulting from a 24-hour, 25-year storm. (b) Run-off from the active portion of...
40 CFR 258.26 - Run-on/run-off control systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... storm; (2) A run-off control system from the active portion of the landfill to collect and control at least the water volume resulting from a 24-hour, 25-year storm. (b) Run-off from the active portion of...
40 CFR 258.26 - Run-on/run-off control systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... storm; (2) A run-off control system from the active portion of the landfill to collect and control at least the water volume resulting from a 24-hour, 25-year storm. (b) Run-off from the active portion of...
Sensory System for Implementing a Human—Computer Interface Based on Electrooculography
Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena
2011-01-01
This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes. PMID:22346579
Development and Application of integrated monitoring platform for the Doppler Weather SA-BAND Radar
NASA Astrophysics Data System (ADS)
Zhang, Q.; Sun, J.; Zhao, C. C.; Chen, H. Y.
2017-10-01
The Doppler weather SA-band radar is an important part of modern meteorological observation, so monitoring the radar's running status and data transmission is important. This paper introduces the composition of the radar system and the classification of radar data, and analyses the characteristics and patterns of the radar when it is normal or abnormal. Using Macromedia Dreamweaver and PHP, we developed an integrated monitoring platform for the Doppler weather SA-band radar which monitors, on a Web page, the real-time running status of the radar system and important performance indicators such as radar power and status parameters, and triggers an audio alarm when the status is abnormal.
Run-Curve Design for Energy Saving Operation in a Modern DC-Electrification
NASA Astrophysics Data System (ADS)
Koseki, Takafumi; Noda, Takashi
Mechanical brakes are often used by electric trains. These brakes have several problems, such as response speed, friction-coefficient variability and maintenance cost. As a result, methods for actively using regenerative brakes are required. In this paper, we propose pure electric braking, which replaces ordinary braking at high speed with regenerative braking alone, without any mechanical brakes. Benefits of our proposal include a DC-electrification system with regenerative substations that can return power to the commercial power system, and a train that can use the full regenerative braking force. We furthermore evaluate the effects of the proposed method on running time and on the energy saved by the regenerative substations.
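A back-of-envelope sketch of the energy at stake: the kinetic energy of a decelerating train, E = ½mv², is returned to the line by regenerative braking instead of being dissipated as heat in mechanical brakes. The mass, speed and recovery fraction below are illustrative values, not figures from the paper.

```python
# Kinetic energy recoverable by regenerative braking, E = 1/2 m v^2 times
# a recovery fraction. All input values are illustrative assumptions.
def recoverable_energy_kwh(mass_kg, speed_ms, recovery_fraction):
    e_joules = 0.5 * mass_kg * speed_ms**2 * recovery_fraction
    return e_joules / 3.6e6              # J -> kWh

# e.g. a 200 t train braking from 25 m/s (90 km/h), recovering 80%
e = recoverable_energy_kwh(mass_kg=200_000, speed_ms=25.0, recovery_fraction=0.8)
print(round(e, 1))
```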
NASA Astrophysics Data System (ADS)
Rougé, Charles; Harou, Julien J.; Pulido-Velazquez, Manuel; Matrosov, Evgenii S.
2017-04-01
The marginal opportunity cost of water refers to benefits forgone by not allocating an additional unit of water to its most economically productive use at a specific location in a river basin at a specific moment in time. Estimating the opportunity cost of water is an important contribution to water management, as it can be used for better water allocation or better system operation, and can suggest where future water infrastructure could be most beneficial. Opportunity costs can be estimated using 'shadow values' provided by hydro-economic optimization models. Yet such models' use of optimization means they have difficulty accurately representing the impact of operating rules and regulatory and institutional mechanisms on actual water allocation. In this work we use more widely available river basin simulation models to estimate opportunity costs. This has been done before by adding a small quantity of water to the model at the place and time where the opportunity cost should be computed, then running a simulation and comparing the difference in system benefits. The added system benefits per unit of water added to the system then provide an approximation of the opportunity cost. This approximation can then be used to design efficient pricing policies that provide incentives for users to reduce their water consumption. Yet this method requires one simulation run per node and per time step, which is computationally demanding for large-scale systems and short time steps (e.g., a day or a week). Besides, opportunity cost estimates are supposed to reflect the most productive use of an additional unit of water, yet the simulation rules do not necessarily use water that way. In this work, we propose an alternative approach, which computes the opportunity cost through a double backward induction, first recursively from outlet to headwaters within the river network at each time step, then recursively backwards in time.
Both backward inductions only require linear operations, and the resulting algorithm tracks the maximal benefit that can be obtained by having an additional unit of water at any node in the network and at any date in time. Results 1) can be obtained from the results of a rule-based simulation using a single post-processing run, and 2) are exactly the (gross) benefit forgone by not allocating an additional unit of water to its most productive use. The proposed method is applied to London's water resource system to track the value of storage in the city's water supply reservoirs on the Thames River throughout a weekly 85-year simulation. Results, obtained in 0.4 seconds on a single processor, reflect the environmental cost of water shortage. This fast computation allows visualizing the seasonal variations of the opportunity cost depending on reservoir levels, demonstrating the potential of this approach for exploring water values and its variations using simulation models with multiple runs (e.g. of stochastically generated plausible future river inflows).
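The double backward induction can be illustrated on a toy single-chain network: the value of an extra unit of water at a node is the best of using it locally or passing it downstream, and the value of storing it is the best over the remaining time steps. The benefit numbers are illustrative, and losses and capacity limits are ignored in this sketch.

```python
# Toy double backward induction for marginal water values on a river chain.
# Nodes are ordered headwater -> outlet; water may flow downstream within a
# time step or be carried forward in time (no losses or capacity limits).
def opportunity_cost(marginal_benefit):
    """marginal_benefit[t][n]: benefit of one extra unit at node n, time t."""
    T, N = len(marginal_benefit), len(marginal_benefit[0])
    value = [[0.0] * N for _ in range(T)]
    for t in reversed(range(T)):                 # backwards in time
        for n in reversed(range(N)):             # outlet first, then upstream
            best = marginal_benefit[t][n]
            if n + 1 < N:                        # option: release downstream
                best = max(best, value[t][n + 1])
            if t + 1 < T:                        # option: store for later
                best = max(best, value[t + 1][n])
            value[t][n] = best
    return value

# Two time steps, three nodes; the outlet is most productive at t=1, so an
# extra unit anywhere/anytime is worth that best downstream-future use.
v = opportunity_cost([[1.0, 2.0, 3.0],
                      [1.5, 2.5, 5.0]])
print(v[0][0])
```

Both passes are linear in the number of nodes and time steps, which is why the paper's single post-processing run is so much cheaper than one simulation per node and time step.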
Williams, Paul T
2012-01-01
Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities.
Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
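The two MET-hour accounting methods compared above can be illustrated numerically. The 1.02 MET·h/km factor is taken from the abstract; the 11.0 MET intensity for a 6 min/km pace and the 8 km/day distance are assumed, illustrative values, not figures from the study:

```python
# Two ways of computing a runner's daily MET-hours, as in the study.

def met_hours_from_distance(km_per_day):
    """Distance-based estimate: 1.02 MET*h per km (from the abstract)."""
    return 1.02 * km_per_day

def met_hours_from_time(hours_per_day, met_intensity):
    """Time-and-intensity estimate: duration times MET level."""
    return hours_per_day * met_intensity

km = 8.0                    # assumed daily distance
pace_h_per_km = 0.1         # 6 min/km pace expressed in hours per km
dist_based = met_hours_from_distance(km)                    # ~8.16
time_based = met_hours_from_time(km * pace_h_per_km, 11.0)  # ~8.8

print(f"distance-based: {dist_based:.2f} METhr/d")
print(f"time-based:     {time_based:.2f} METhr/d")
```

With these assumed inputs the two estimates differ by only a few percent; the 38-43% gap reported in the study arises from self-reported times and intensities, which the distance-based calculation bypasses.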
Total hydrocarbon content (THC) testing in liquid oxygen (LOX) systems
NASA Astrophysics Data System (ADS)
Meneghelli, B. J.; Obregon, R. E.; Ross, H. R.; Hebert, B. J.; Sass, J. P.; Dirschka, G. E.
2015-12-01
The measured Total Hydrocarbon Content (THC) levels in liquid oxygen (LOX) systems at Stennis Space Center (SSC) have shown wide variations. Examples of these variations include the following: 1) differences between vendor-supplied THC values and those obtained using standard SSC analysis procedures; and 2) increasing THC values over time at an active SSC test stand in both storage and run vessels. A detailed analysis of LOX sampling techniques, analytical instrumentation, and sampling procedures will be presented. Additional data obtained on LOX system operations and LOX delivery trailer THC values during the past 12-24 months will also be discussed. Field test results showing THC levels and the distribution of the THC's in the test stand run tank, modified for THC analysis via dip tubes, will be presented.
D'Angelo, Lorenzo T; Schneider, Michael; Neugebauer, Paul; Lueth, Tim C
2011-01-01
In this contribution, a new concept for interfacing sensor network nodes (motes) and smartphones is presented for the first time. In recent years, a variety of telemedicine applications on smartphones for data reception, display and transmission have been developed. However, it is not always practical or possible to have a smartphone application running continuously to accomplish these tasks. The presented system allows receiving and storing data continuously using a mote, and visualizing or sending the data on the go using the smartphone as a user interface only when desired. Thus, the processes of data reception and storage run on a safe system consuming less energy, and the smartphone's resources and battery are not demanded continuously. Both the system concept and its realization with an Apple iPhone are presented.
Architecture of a Framework for Providing Information Services for Public Transport
García, Carmelo R.; Pérez, Ricardo; Lorenzo, Álvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained. PMID:22778585
The Laser calibration of the ATLAS Tile Calorimeter during the LHC run 1
Abdallah, J.; Alexa, C.; Coutinho, Y. Amaral; ...
2016-10-12
This article describes the Laser calibration system of the ATLAS hadronic Tile Calorimeter that was used during Run 1 of the LHC. First, the stability of the system and its associated readout electronics is studied; it is found to be stable, with variations smaller than 0.6%. Then, the method developed to compute the calibration constants, which correct for variations in the gain of the calorimeter photomultipliers, is described. These constants were determined with a statistical uncertainty of 0.3% and a systematic uncertainty of 0.2% for the central part of the calorimeter and 0.5% for the end-caps. Lastly, the detection and correction of timing mis-configuration of the Tile Calorimeter using the Laser system are also presented.
Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0
NASA Technical Reports Server (NTRS)
Knox, J. C.
1996-01-01
The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.
Effect of Minimalist Footwear on Running Efficiency: A Randomized Crossover Trial.
Gillinov, Stephen M; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M
2015-05-01
Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. It was hypothesized that minimalist and barefoot conditions improve running efficiency compared with traditional running shoes. Study design: randomized crossover trial; level of evidence: 3. Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes.
Minimal algorithm for running an internal combustion engine
NASA Astrophysics Data System (ADS)
Stoica, V.; Borborean, A.; Ciocan, A.; Manciu, C.
2018-01-01
The control of internal combustion engines is a well-known topic within the automotive industry and is widely applied. However, in research laboratories and universities a commercial off-the-shelf control system is not the best solution, because its operating algorithms and calibrations are predetermined (accessible only by the manufacturer) and allow little intervention from outside, while dedicated laboratory solutions on the market are very expensive. Consequently, this paper presents a minimal algorithm required to start up and run an internal combustion engine. The presented solution can be adapted to run on high-performance microcontrollers available on the market at an affordable price. The presented algorithm was implemented in LabVIEW and runs on a CompactRIO hardware platform.
Aozan: an automated post-sequencing data-processing pipeline.
Perrin, Sandrine; Firmo, Cyril; Lemoine, Sophie; Le Crom, Stéphane; Jourdren, Laurent
2017-07-15
Data management and quality control of output from Illumina sequencers is a disk space- and time-consuming task. Thus, we developed Aozan to automatically handle data transfer, demultiplexing, conversion and quality control once a run has finished. This software greatly improves run data management and the monitoring of run statistics via automatic emails and HTML web reports. Aozan is implemented in Java and Python, supported on Linux systems, and distributed under the GPLv3 License at: http://www.outils.genomique.biologie.ens.fr/aozan/ . Aozan source code is available on GitHub: https://github.com/GenomicParisCentre/aozan . aozan@biologie.ens.fr. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
NASA Technical Reports Server (NTRS)
Johnson, Charles S.
1986-01-01
The embedded systems running real-time applications, for which Ada was designed, require their own mechanisms for the management of dynamically allocated storage. Because of the performance implications of garbage collection by the KAPSE, there is a need for packages which manage their own internal structures and control their deallocation as well. This places a requirement upon the design of generic packages which manage generically structured private types built up from application-defined input types. Such generic packages should figure greatly in the development of lower-level software such as operating systems, schedulers, controllers, and device drivers, and will manage structures such as queues, stacks, linked lists, files, and binary and multiway (hierarchical) trees. A study was made of the use of limited private types to control the accumulation of anonymous, detached objects in running systems and to prevent the inadvertent de-designation of dynamic elements that is implicit in the assignment operation. The use of deallocator procedures for run-down of application-defined input types during deallocation operations is also examined.
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
NASA Astrophysics Data System (ADS)
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods such as deadline-monotonic scheduling.
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This opens up the possibility of maintaining high production speed while still measuring with good resolution.
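A minimal sketch of the per-pixel linear classification the system performs: each class is represented by a linear model (here, a weight vector), and a pixel's spectrum is assigned to the class with the highest score. The class names echo the defect classes above, but the weight vectors and spectra are purely illustrative:

```python
# Hedged sketch of linear spectral classification, assuming each class
# is scored by a dot product between its weights and the pixel spectrum.

def classify_pixel(spectrum, class_weights):
    """Return the label whose linear model scores the spectrum highest."""
    scores = {label: sum(w * s for w, s in zip(weights, spectrum))
              for label, weights in class_weights.items()}
    return max(scores, key=scores.get)

# Illustrative 3-band models: blue stain reflects strongly in band 0.
weights = {
    "reference":  [0.2, 0.5, 0.3],
    "blue_stain": [0.6, 0.2, 0.1],
}
print(classify_pixel([0.9, 0.1, 0.1], weights))  # -> blue_stain
```

The near-sensor method in the paper evaluates comparisons of this kind in the analogue domain on the sensor itself, which is what keeps the data rate manageable at 2000 lines/s.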
16 CFR 803.10 - Running of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Running of time. 803.10 Section 803.10 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND INTERPRETATIONS UNDER THE HART-SCOTT-RODINO ANTITRUST IMPROVEMENTS ACT OF 1976 TRANSMITTAL RULES § 803.10 Running of time. (a...
Rasmussen, Sten; Sørensen, Henrik; Parner, Erik Thorlund; Lind, Martin; Nielsen, Rasmus Oestergaard
2018-01-01
Background/aim: The Run Clever trial investigated whether there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed the same running schedule (preconditioning). Participants (n=447) not censored during the first eight weeks entered the 16-week training period with a focus on either progression in intensity (Sch-I) or volume (Sch-V). A global positioning system collected all data on running. During running, all participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning, a total of 80 runners sustained an RRI (Sch-I n=36; Sch-V n=44). The cumulative incidence proportions (CIP) in Sch-V (reference group) were: CIP(2 weeks) 4.6%; CIP(4 weeks) 8.2%; CIP(8 weeks) 13.2%; CIP(16 weeks) 28.0%. The risk differences (RD) and 95% CIs between the two schedules were: RD(2 weeks)=2.9% (−5.7% to 11.6%); RD(4 weeks)=1.8% (−9.1% to 12.8%); RD(8 weeks)=−4.7% (−17.5% to 8.1%); RD(16 weeks)=−14.0% (−36.9% to 8.9%). Conclusion: A similar proportion of runners sustained injuries in the two running schedules. PMID:29527322
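The Sch-I cumulative incidence implied by the reported figures can be reconstructed from the reference-group CIPs and the risk differences, using only numbers from the abstract (RD = CIP Sch-I minus CIP Sch-V):

```python
# Reconstructing the implied Sch-I cumulative incidence proportions
# from the reported Sch-V (reference) CIPs and risk differences.

cip_sch_v = {2: 4.6, 4: 8.2, 8: 13.2, 16: 28.0}   # % injured by week
rd = {2: 2.9, 4: 1.8, 8: -4.7, 16: -14.0}          # RD in % points

cip_sch_i = {wk: round(cip_sch_v[wk] + rd[wk], 1) for wk in cip_sch_v}
print(cip_sch_i)  # -> {2: 7.5, 4: 10.0, 8: 8.5, 16: 14.0}
```

The wide confidence intervals on the RDs (e.g. −36.9% to 8.9% at 16 weeks) are why the trial concludes the two schedules produced a similar injury proportion despite the −14.0 point difference.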
Adolescent runners: the effect of training shoes on running kinematics.
Mullen, Scott; Toby, E Bruce
2013-06-01
The modern running shoe typically features a large cushioned heel intended to dissipate the energy at heel strike to the knees and hips. The purpose of this study was to evaluate the effect that shoes have upon the running biomechanics among competitive adolescent runners. We wish to answer the question of whether running style is altered in these athletes because of footwear. Twelve competitive adolescent athletes were recruited from local track teams. Each ran on a treadmill in large heel trainers, track flats, and barefoot. Four different speeds were used to test each athlete. The biomechanics were assessed with a motion capture system. Stride length, heel height during posterior swing phase, and foot/ground contact were recorded. Shoe type markedly altered the running biomechanics. The foot/ground contact point showed differences in terms of footwear (P<0.0001) and speed (P=0.000215). When wearing trainers, the athletes landed on their heels 69.79% of the time at all speeds (P<0.001). The heel was the first point of contact <35% of the time in the flat condition and <30% in the barefoot condition. Running biomechanics are significantly altered by shoe type in competitive adolescents. Heavily heeled cushioned trainers promote a heel strike pattern, whereas track flats and barefoot promote a forefoot or midfoot strike pattern. Training in heavily cushioned trainers by the competitive runner has not been clearly shown to be detrimental to performance, but it does change the gait pattern. It is not known whether the altered biomechanics of the heavily heeled cushioned trainer may be detrimental to the adolescent runner who is still developing a running style.
Run-Time Support for Rapid Prototyping
1988-12-01
prototyping. One such system is the Computer-Aided Prototyping System (CAPS), which combines rapid prototyping with automatic program generation, a design database, and a design management system [Ref. 3: p. 66]. Most prototyping systems perform these functions; CAPS is different in that it combines rapid prototyping with a variant of automatic program generation.
Comparison of the AMDAHL 470V/6 and the IBM 370/195 using benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snider, D.R.; Midlock, J.L.; Hinds, A.R.
1976-03-01
Six groups of jobs were run on the IBM 370/195 at the Applied Mathematics Division (AMD) of Argonne National Laboratory using the current production versions of OS/MVT 21.7 and ASP 3.1. The same jobs were then run on an AMDAHL 470V/6 at the AMDAHL manufacturing facilities in Sunnyvale, California, using the identical operating systems. Performances of the two machines are compared. Differences in the configurations were minimized. The memory size on each machine was the same, all software which had an impact on run times was the same, and the I/O configurations were as similar as possible. This allowed the comparison to be based on the relative performance of the two CPUs. As part of the studies preliminary to the acquisition of the IBM 195 in 1972, two of the groups of jobs had been run on a CDC 7600 by CDC personnel in Arden Hills, Minnesota, on an IBM 360/195 by IBM personnel in Poughkeepsie, New York, and on the AMD 360/50/75 production system in June, 1971. 6 figures, 9 tables.
Altered Running Economy Directly Translates to Altered Distance-Running Performance.
Hoogkamer, Wouter; Kipp, Shalaya; Spiering, Barry A; Kram, Rodger
2016-11-01
Our goal was to quantify if small (1%-3%) changes in running economy quantitatively affect distance-running performance. Based on the linear relationship between metabolic rate and running velocity and on earlier observations that added shoe mass increases metabolic rate by ~1% per 100 g per shoe, we hypothesized that adding 100 and 300 g per shoe would slow 3000-m time-trial performance by 1% and 3%, respectively. Eighteen male sub-20-min 5-km runners completed treadmill testing, and three 3000-m time trials wearing control shoes and identical shoes with 100 and 300 g of discreetly added mass. We measured rates of oxygen consumption and carbon dioxide production and calculated metabolic rates for the treadmill tests, and we recorded overall running time for the time trials. Adding mass to the shoes significantly increased metabolic rate at 3.5 m/s by 1.11% per 100 g per shoe (95% confidence interval = 0.88%-1.35%). While wearing the control shoes, participants ran the 3000-m time trial in 626.1 ± 55.6 s. Times averaged 0.65% ± 1.36% and 2.37% ± 2.09% slower for the +100-g and +300-g shoes, respectively (P < 0.001). On the basis of a linear fit of all the data, 3000-m time increased 0.78% per added 100 g per shoe (95% confidence interval = 0.52%-1.04%). Adding shoe mass predictably degrades running economy and slows 3000-m time-trial performance proportionally. Our data demonstrate that laboratory-based running economy measurements can accurately predict changes in distance-running race performance due to shoe modifications.
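The reported linear fit can be used to predict time-trial slowdown from added shoe mass. The 0.78% per 100 g slope and the 626.1 s control time are taken from the abstract; treating the relationship as exactly linear over any added mass is a simplifying assumption:

```python
# Predict 3000-m time from added shoe mass, using the study's linear
# fit of 0.78% slower per 100 g added per shoe.

def predicted_time(baseline_s, added_g_per_shoe, pct_per_100g=0.78):
    """Scale the baseline time by the fitted fractional slowdown."""
    slowdown = (pct_per_100g / 100.0) * (added_g_per_shoe / 100.0)
    return baseline_s * (1.0 + slowdown)

baseline = 626.1  # control-shoe 3000-m time (s), study mean
print(round(predicted_time(baseline, 100), 1))  # -> 631.0 (+100 g)
print(round(predicted_time(baseline, 300), 1))  # -> 640.8 (+300 g)
```

These predictions (+0.78% and +2.34%) sit within the observed slowdowns of 0.65% ± 1.36% and 2.37% ± 2.09%, which is the abstract's central claim.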
Fundamental movement skills testing in children with cerebral palsy.
Capio, Catherine M; Sit, Cindy H P; Abernethy, Bruce
2011-01-01
To examine the inter-rater reliability and comparative validity of product-oriented and process-oriented measures of fundamental movement skills among children with cerebral palsy (CP). In total, 30 children with CP aged 6 to 14 years (Mean = 9.83, SD = 2.5) and classified in Gross Motor Function Classification System (GMFCS) levels I-III performed tasks of catching, throwing, kicking, horizontal jumping and running. Process-oriented assessment was undertaken using a number of components of the Test of Gross Motor Development (TGMD-2), while product-oriented assessment included measures of time taken, distance covered and number of successful task completions. Cohen's kappa, Spearman's rank correlation coefficient and tests to compare correlated correlation coefficients were performed. Very good inter-rater reliability was found. Process-oriented measures for running and jumping had significant associations with GMFCS, as did seven product-oriented measures for catching, throwing, kicking, running and jumping. Product-oriented measures of catching, kicking and running had stronger associations with GMFCS than the corresponding process-oriented measures. Findings support the validity of process-oriented measures for running and jumping and of product-oriented measures of catching, throwing, kicking, running and jumping. However, product-oriented measures for catching, kicking and running appear to have stronger associations with functional abilities of children with CP, and are thus recommended for use in rehabilitation processes.
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. 
Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
DYNACLIPS (DYNAmic CLIPS): A dynamic knowledge exchange tool for intelligent agents
NASA Technical Reports Server (NTRS)
Cengeloglu, Yilmaz; Khajenoori, Soheil; Linton, Darrell
1994-01-01
In a dynamic environment, intelligent agents must be responsive to unanticipated conditions. When such conditions occur, an intelligent agent may have to stop a previously planned and scheduled course of actions and replan, reschedule, start new activities and initiate a new problem solving process to successfully respond to the new conditions. Problems occur when an intelligent agent does not have enough knowledge to properly respond to the new situation. DYNACLIPS is an implementation of a framework for dynamic knowledge exchange among intelligent agents. Each intelligent agent is a CLIPS shell and runs as a separate process under the SunOS operating system. Intelligent agents can exchange facts, rules, and CLIPS commands at run time. Knowledge exchange among intelligent agents at run time does not affect the execution of either the sender or the receiver agent. Intelligent agents can keep the knowledge temporarily or permanently. In other words, knowledge exchange among intelligent agents allows a form of learning to be accomplished.
Compact, high-speed algorithm for laying out printed circuit board runs
NASA Astrophysics Data System (ADS)
Zapolotskiy, D. Y.
1985-09-01
A high-speed printed circuit connection layout algorithm is described, developed within the framework of an interactive system for designing two-sided printed circuit boards. For this reason, algorithm speed was considered, a priori, a requirement equally as important as the inherent demand for minimizing circuit run lengths and the number of junction openings. This resulted from the fact that, in order to provide psychological man/machine compatibility in the design process, real-time dialog during the layout phase is possible only within limited time frames (on the order of several seconds) for each circuit run. The work was carried out for use on an ARM-R automated work site complex based on an SM-4 minicomputer with a 32K-word memory. This limited memory capacity heightened the demand for algorithm speed and also tightened data file structure and size requirements. The layout algorithm's design logic is analyzed. The structure and organization of the data files are described.
Reduced SWAP-C VICTORY Services Execution and Performance Evaluation
2012-08-01
Performing organization: UBT, Inc., 3250 W Big Beaver Rd, Suite 329, Troy, MI 48084. Presented at the Symposium, August 14-16, Troy, Michigan. Abstract: Executing multiple VICTORY data services and reading multiple VICTORY-compliant sensors at the same time resulted in the following performance measurements for the system: 0.64 A / 3.15 W power consumption at run time, and roughly 0.77% system
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
Operational Implementation Design for the Earth System Prediction Capability (ESPC): A First-Look
2014-02-20
Hybrid NAVDAS-AR data assimilation system, assisted by dynamic estimates of the error in the background forecasts. 2.1.2 NAVDAS-AR: the ... directly assimilates radiances from microwave radiometers and from interferometers and spectrometers in the infrared, and bending angle from Global ... real-time analysis (at +3:00). Late in the 12-hr watch (around +8:00), a post-time NAVGEM/NAVDAS-AR run generates the background fields for the next
Belke, Terry W; Christie-Fougere, Melissa M
2006-11-01
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.
Technology Tools for the Tough Tasks: Plug in for Great Outcomes
ERIC Educational Resources Information Center
Simon, Fran
2012-01-01
There are a lot of easy-to-use online tools that can help teachers and administrators with the tough tasks involved in running efficient, responsive, and intentional programs. The efficiencies offered through these systems allow busy educators to spend less time managing information and more time doing the work that matters the most--working with…
Further Education outside the Jurisdiction of Local Education Authorities in Post-War England
ERIC Educational Resources Information Center
Simmons, Robin
2014-01-01
This article revisits the three decades following the end of World War Two--a time when, following the 1944 Education Act, local education authorities (LEAs) were the key agencies responsible for running the education system across England. For the first time, there was a statutory requirement for LEAs to secure adequate facilities for further…
Gates, Timothy J; Noyce, David A
2016-11-01
This manuscript describes the development and evaluation of a conceptual framework for real-time operation of dynamic on-demand extension of the red clearance interval as a countermeasure for red-light-running. The framework includes a decision process for determining, based on the real-time status of vehicles arriving at the intersection, when extension of the red clearance interval should occur and the duration of each extension. A zonal classification scheme was devised to assess whether an approaching vehicle requires additional time to safely clear the intersection based on the remaining phase time, type of vehicle, current speed, and current distance from the intersection. Expected performance of the conceptual framework was evaluated through modeling of replicated field operations using vehicular event data collected as part of this research. The results showed highly accurate classification of red-light-running vehicles needing additional clearance time and relatively few false extension calls from stopping vehicles, thereby minimizing the expected impacts to signal and traffic operations. Based on the recommended parameters, extension calls were predicted to occur once every 26.5 cycles. Assuming a 90-s cycle, 1.5 extensions per hour were expected per approach, with an estimated extension time of 2.30 s/h. Although field implementation was not performed, it is anticipated that long-term reductions in targeted red-light-running conflicts and crashes will likely occur if red clearance interval extension systems are implemented at locations where start-up delay on the conflicting approach is generally minimal, such as intersections with lag left-turn phasing. Copyright © 2015 Elsevier Ltd. All rights reserved.
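The zonal decision described above can be illustrated with a small sketch: a vehicle triggers an extension only if it can neither stop before the stop bar nor clear the far side of the intersection before the red clearance ends. The function name, deceleration rate, and intersection width below are illustrative assumptions, not parameters from the paper.

```python
def needs_extension(speed_mps, dist_to_stopbar_m, remaining_red_s,
                    intersection_width_m=20.0, decel_mps2=3.0):
    """Return (extend?, extra seconds) for one approaching vehicle.

    Hypothetical sketch: all thresholds are illustrative, not from the study.
    """
    if speed_mps == 0:
        return False, 0.0
    # distance needed to brake to a stop at a comfortable deceleration
    stopping_dist = speed_mps ** 2 / (2 * decel_mps2)
    if stopping_dist <= dist_to_stopbar_m:
        return False, 0.0  # vehicle can stop; no red-light-running predicted
    # time for the vehicle to clear the far side of the intersection
    clear_time = (dist_to_stopbar_m + intersection_width_m) / speed_mps
    if clear_time > remaining_red_s:
        return True, round(clear_time - remaining_red_s, 2)
    return False, 0.0
```

For example, a vehicle at 20 m/s, 10 m from the stop bar, with 1.0 s of red clearance remaining cannot stop in time and needs an extra 0.5 s to clear.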
3RIP Evaluation of the Performance of the Search System Using a Realtime Simulation Technique.
ERIC Educational Resources Information Center
Lofstrom, Mats
This report describes a real-time simulation experiment to evaluate the performance of the search and editing system 3RIP, an interactive system written in the language BLISS on a DEC-10 computer. The test vehicle, preliminary test runs, and capacity test are detailed, and the following conclusions are reported: (1) 3RIP performs well up to the…
Partitioning the metabolic cost of human running: a task-by-task approach.
Arellano, Christopher J; Kram, Rodger
2014-12-01
Compared with other species, humans can be very tractable and thus an ideal "model system" for investigating the metabolic cost of locomotion. Here, we review the biomechanical basis for the metabolic cost of running. Running has been historically modeled as a simple spring-mass system whereby the leg acts as a linear spring, storing, and returning elastic potential energy during stance. However, if running can be modeled as a simple spring-mass system with the underlying assumption of perfect elastic energy storage and return, why does running incur a metabolic cost at all? In 1980, Taylor et al. proposed the "cost of generating force" hypothesis, which was based on the idea that elastic structures allow the muscles to transform metabolic energy into force, and not necessarily mechanical work. In 1990, Kram and Taylor then provided a more explicit and quantitative explanation by demonstrating that the rate of metabolic energy consumption is proportional to body weight and inversely proportional to the time of foot-ground contact for a variety of animals ranging in size and running speed. With a focus on humans, Kram and his colleagues then adopted a task-by-task approach and initially found that the metabolic cost of running could be "individually" partitioned into body weight support (74%), propulsion (37%), and leg-swing (20%). Summing all these biomechanical tasks leads to a paradoxical overestimation of 131%. To further elucidate the possible interactions between these tasks, later studies quantified the reductions in metabolic cost in response to synergistic combinations of body weight support, aiding horizontal forces, and leg-swing-assist forces. This synergistic approach revealed that the interactive nature of body weight support and forward propulsion comprises ∼80% of the net metabolic cost of running. The task of leg-swing at most comprises ∼7% of the net metabolic cost of running and is independent of body weight support and forward propulsion. 
In our recent experiments, we have continued to refine this task-by-task approach, demonstrating that maintaining lateral balance comprises only 2% of the net metabolic cost of running. In contrast, arm-swing reduces the cost by ∼3%, indicating a net metabolic benefit. Thus, by considering the synergistic nature of body weight support and forward propulsion, as well as the tasks of leg-swing and lateral balance, we can account for 89% of the net metabolic cost of human running. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
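The partition percentages in this review can be restated as simple arithmetic; the figures below are taken directly from the text (the naive task-by-task sum overshoots 100%, while the synergistic partition accounts for ~89% of the net cost).

```python
# Individually measured tasks sum to a paradoxical overestimate.
individual = {"body weight support": 74, "forward propulsion": 37, "leg swing": 20}
naive_sum = sum(individual.values())  # 131% of the net metabolic cost

# The synergistic partition reported in the review.
synergistic = {"support + propulsion": 80, "leg swing": 7, "lateral balance": 2}
accounted = sum(synergistic.values())  # ~89% of the net metabolic cost
```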
First International Diagnosis Competition - DXC'09
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Kuhn, Lukas; de Kleer, Johan; van Gemund, Arjan; Feldman, Alexander
2009-01-01
A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC'09. The goal of this competition was to evaluate and compare DAs in a common platform and to determine a winner based on diagnosis results. 12 DAs (model-based and otherwise) competed in this first year of the competition in 3 tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from the sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC'09, a description of the faults and data sets, a listing of participating DAs, the metrics and results computed from running the DAs, and a preliminary analysis of the results.
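A ranking scheme based on weighted metrics might be sketched as follows; the metric names, weights, and scores are invented for illustration and are not the competition's actual scoring.

```python
def rank_das(scores, weights):
    """Rank diagnosis algorithms by weighted sum of their metric values.

    scores:  {algorithm_name: {metric_name: value}}
    weights: {metric_name: weight}
    Returns algorithm names, best (highest weighted total) first.
    """
    totals = {da: sum(weights[m] * v for m, v in metrics.items())
              for da, metrics in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)
```

For example, `rank_das({"A": {"accuracy": 0.9, "latency": 0.2}, "B": {"accuracy": 0.7, "latency": 0.9}}, {"accuracy": 0.8, "latency": 0.2})` ranks "A" first (0.76 vs. 0.74).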
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)
Code of Federal Regulations, 2014 CFR
2014-01-01
... that is time-limited. K25.1.4 Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight. (c) Engine oil tank design. The engine oil...
14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)
Code of Federal Regulations, 2010 CFR
2010-01-01
... that is time-limited. K25.1.4 Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight. (c) Engine oil tank design. The engine oil...
14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)
Code of Federal Regulations, 2013 CFR
2013-01-01
... that is time-limited. K25.1.4 Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight. (c) Engine oil tank design. The engine oil...
14 CFR Appendix K to Part 25 - Extended Operations (ETOPS)
Code of Federal Regulations, 2012 CFR
2012-01-01
... that is time-limited. K25.1.4 Propulsion systems. (a) Fuel system design. Fuel necessary to complete an... does not apply to airplanes with a required flight engineer. (b) APU design. If an APU is needed to..., whichever is lower, and run for the remainder of any flight. (c) Engine oil tank design. The engine oil...
2004-02-26
Shorter payback periods After 19 Cost Benefit of Powerlink Rule of Thumb for Powerlink: Powerlink becomes more cost effective beyond 16 controlled...web enabled control (and management software) Increase in level of integration between building systems Increase in new features, functions, benefits ...focus on reducing run-time via Scheduling, Sensing, Switching Growing focus on payback Direct energy cost (with demand) Additional maintenance benefits
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea
2000-01-01
The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.
Effect of Minimalist Footwear on Running Efficiency
Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.
2015-01-01
Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.
Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel
2016-08-30
Distributed Computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collection and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were analyzed and a detailed analysis report was produced. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
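The abstract does not give the TPR method's details. As a hedged sketch of the general idea, one might fit a separate linear model to the late phase of a task's (time, progress) history and extrapolate it to 100% progress; the split point, function name, and data shapes below are assumptions for illustration.

```python
def predict_finish(times, progress, split=0.5):
    """Extrapolate a task's finishing time from (time, progress) samples.

    Illustrative sketch: fits a least-squares line of time against progress
    over the late phase (progress >= split) and evaluates it at progress = 1.0,
    so early-phase startup effects do not bias the estimate.
    """
    late = [(p, t) for t, p in zip(times, progress) if p >= split]
    xs = [p for p, _ in late]
    ys = [t for _, t in late]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope * 1.0 + intercept  # predicted time at 100% progress
```

A task progressing linearly at 20% per second is predicted to finish at t = 5 s.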
Ada 9X Project Report: Ada 9X Revision Issues. Release 1
1990-04-01
interrupts in Ada. Users are using specialized run-time executives which promote semaphores, monitors, etc., as well as interrupt support, are using...The focus here is on two specific problems: 1. lack of time-out on operations. 2. no efficient way to program a shared-variable monitor for the... operation. 43 !Issue implementation [3 - Remote Operations for Real-Time Systems] The real-time implementation standards should define various remote
Small Unix data acquisition system
NASA Astrophysics Data System (ADS)
Engberg, D.; Glanzman, T.
1994-02-01
An R&D program has been established to investigate the use of Unix in the various aspects of experimental computation. Earlier R&D work investigated the basic real-time aspects of the IBM RS/6000 workstation running AIX, which claims to be a real-time operating system. The next step in this R&D is the construction of a prototype data acquisition system that attempts to exercise many of the features needed in the final on-line system in a realistic situation. For this project, we have combined efforts with a team studying the use of novel cell designs and gas mixtures in a new prototype drift chamber.
NASA Astrophysics Data System (ADS)
Pavlovic, Radenko; Chen, Jack; Beaulieu, Paul-Andre; Anselmo, David; Gravel, Sylvie; Moran, Mike; Menard, Sylvain; Davignon, Didier
2014-05-01
A wildfire emissions processing system has been developed to incorporate near-real-time emissions from wildfires and large prescribed burns into Environment Canada's real-time GEM-MACH air quality (AQ) forecast system. Since the GEM-MACH forecast domain covers Canada and most of the U.S.A., including Alaska, fire location information is needed for both of these large countries. During AQ model runs, emissions from individual fire sources are injected into elevated model layers based on plume-rise calculations and then transport and chemistry calculations are performed. This "on the fly" approach to the insertion of the fire emissions provides flexibility and efficiency since on-line meteorology is used and computational overhead in emissions pre-processing is reduced. GEM-MACH-FireWork, an experimental wildfire version of GEM-MACH, was run in real-time mode for the summers of 2012 and 2013 in parallel with the normal operational version. 48-hour forecasts were generated every 12 hours (at 00 and 12 UTC). Noticeable improvements in the AQ forecasts for PM2.5 were seen in numerous regions where fire activity was high. Case studies evaluating model performance for specific regions and computed objective scores will be included in this presentation. Using the lessons learned from the last two summers, Environment Canada will continue to work towards the goal of incorporating near-real-time intermittent wildfire emissions into the operational air quality forecast system.
5K Run: 7-Week Training Schedule for Beginners
... This 5K training schedule incorporates a mix of running, walking and resting. This combination helps reduce the ... you'll gradually increase the amount of time running and reduce the amount of time walking. If ...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Cable runs. 113.10-3 Section 113.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING COMMUNICATION AND ALARM SYSTEMS AND EQUIPMENT Fire and Smoke Detecting and Alarm Systems § 113.10-3 Cable runs. Cable runs between...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Cable runs. 113.10-3 Section 113.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING COMMUNICATION AND ALARM SYSTEMS AND EQUIPMENT Fire and Smoke Detecting and Alarm Systems § 113.10-3 Cable runs. Cable runs between...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Cable runs. 113.10-3 Section 113.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING COMMUNICATION AND ALARM SYSTEMS AND EQUIPMENT Fire and Smoke Detecting and Alarm Systems § 113.10-3 Cable runs. Cable runs between...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Cable runs. 113.10-3 Section 113.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING COMMUNICATION AND ALARM SYSTEMS AND EQUIPMENT Fire and Smoke Detecting and Alarm Systems § 113.10-3 Cable runs. Cable runs between...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Cable runs. 113.10-3 Section 113.10-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING COMMUNICATION AND ALARM SYSTEMS AND EQUIPMENT Fire and Smoke Detecting and Alarm Systems § 113.10-3 Cable runs. Cable runs between...
Match running performance and fitness in youth soccer.
Buchheit, M; Mendez-Villanueva, A; Simpson, B M; Bourdon, P C
2010-11-01
The activity profiles of highly trained young soccer players were examined in relation to age, playing position and physical capacity. Time-motion analyses (global positioning system) were performed on 77 players (U13-U18; fullbacks [FB], centre-backs [CB], midfielders [MD], wide midfielders [W], second strikers [2ndS] and strikers [S]) during 42 international club games. Total distance covered (TD) and very high-intensity activities (VHIA; >16.1 km·h⁻¹) were computed during 186 entire player-matches. Physical capacity was assessed via field test measures (e.g., peak running speed during an incremental field test, VVam-eval). Match running performance showed an increasing trend with age (P<0.001, partial eta-squared (η²): 0.20-0.45). When adjusted for age and individual playing time, match running performance was position-dependent (P<0.001, η²: 0.13-0.40). MD covered the greatest TD; CB the lowest (P<0.05). Distance for VHIA was lower for CB compared with all other positions (P<0.05); W and S displayed the highest VHIA (P<0.05). Relationships between match running performance and physical capacities were position-dependent, with poor or non-significant correlations within FB, CB, MD and W (e.g., VHIA vs. VVam-eval: R=0.06 in FB) but large associations within 2ndS and S positions (e.g., VHIA vs. VVam-eval: R=0.70 in 2ndS). In highly trained young soccer players, the importance of fitness level as a determinant of match running performance should be regarded as a function of playing position.
Metabolic power demands of rugby league match play.
Kempton, Tom; Sirotic, Anita Claire; Rampinini, Ermanno; Coutts, Aaron James
2015-01-01
To describe the metabolic demands of rugby league match play for positional groups and compare match distances obtained from high-speed-running classifications with those derived from high metabolic power. Global positioning system (GPS) data were collected from 25 players from a team competing in the National Rugby League competition over 39 matches. Players were classified into positional groups (adjustables, outside backs, hit-up forwards, and wide-running forwards). The GPS devices provided instantaneous raw velocity data at 5 Hz, which were exported to a customized spreadsheet. The spreadsheet provided calculations for speed-based distances (eg, total distance; high-speed running, >14.4 km/h; and very-high-speed running, >18.1 km/h) and metabolic-power variables (eg, energy expenditure; average metabolic power; and high-power distance, >20 W/kg). The data show that speed-based distances and metabolic power varied between positional groups, although this was largely related to differences in time spent on field. The distance covered at high running speed was lower than that obtained from high-power thresholds for all positional groups; however, the difference between the 2 methods was greatest for hit-up forwards and adjustables. Positional differences existed for all metabolic parameters, although these are at least partially related to time spent on the field. Higher-speed running may underestimate the demands of match play when compared with high-power distance-although the degree of difference between the measures varied by position. The analysis of metabolic power may complement traditional speed-based classifications and improve our understanding of the demands of rugby league match play.
Effects of a minimalist shoe on running economy and 5-km running performance.
Fuller, Joel T; Thewlis, Dominic; Tsiros, Margarita D; Brown, Nicholas A T; Buckley, Jonathan D
2016-09-01
The purpose of this study was to determine if minimalist shoes improve time trial performance of trained distance runners and if changes in running economy, shoe mass, stride length, stride rate and footfall pattern were related to any difference in performance. Twenty-six trained runners performed three 6-min sub-maximal treadmill runs at 11, 13 and 15 km·h⁻¹ in minimalist and conventional shoes while running economy, stride length, stride rate and footfall pattern were assessed. They then performed a 5-km time trial. In the minimalist shoe, runners completed the trial in less time (effect size 0.20 ± 0.12), were more economical during sub-maximal running (effect size 0.33 ± 0.14) and decreased stride length (effect size 0.22 ± 0.10) and increased stride rate (effect size 0.22 ± 0.11). All but one runner ran with a rearfoot footfall in the minimalist shoe. Improvements in time trial performance were associated with improvements in running economy at 15 km·h⁻¹ (r = 0.58), with 79% of the improved economy accounted for by reduced shoe mass (P < 0.05). The results suggest that running in minimalist shoes improves running economy and 5-km running performance.
Sex-related differences in the wheel-running activity of mice decline with increasing age.
Bartling, Babett; Al-Robaiy, Samiya; Lehnich, Holger; Binder, Leonore; Hiebl, Bernhard; Simm, Andreas
2017-01-01
Laboratory mice of both sexes having free access to running wheels are commonly used to study mechanisms underlying the beneficial effects of physical exercise on health and aging in humans. However, comparative wheel-running activity profiles of male and female mice over a long period of time, in which increasing age plays an additional role, are unknown. Therefore, we permanently recorded the wheel-running activity (i.e., total distance, median velocity, time of breaks) of female and male mice until 9 months of age. Our records indicated higher wheel-running distances for females than males, which were highest in 2-month-old mice. This was mainly reached by higher running velocities of the females and not by longer running times. However, the sex-related differences declined in parallel to the age-associated reduction in wheel-running activities. Female mice also showed more variance between the weekly running distances than males, recorded most often for females 4-6 months old but not older. Additional records of 24-month-old mice of both sexes indicated highly reduced wheel-running activities at old age. Surprisingly, this reduction at old age resulted mainly from lower running velocities and not from shorter running times. Old mice also differed in their course of night activity, which peaked later compared to younger mice. In summary, we demonstrated the influence of sex on the age-dependent activity profile of mice, which contrasts somewhat with humans, and this has to be considered when translating exercise-mediated mechanisms from mouse to human. Copyright © 2016. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Campbell, R. H.; Essick, Ray B.; Johnston, Gary; Kenny, Kevin; Russo, Vince
1987-01-01
Project EOS is studying the problems of building adaptable real-time embedded operating systems for the scientific missions of NASA. Choices (A Class Hierarchical Open Interface for Custom Embedded Systems) is an operating system designed and built by Project EOS to address the following specific issues: the software architecture for adaptable embedded parallel operating systems, the achievement of high-performance and real-time operation, the simplification of interprocess communications, the isolation of operating system mechanisms from one another, and the separation of mechanisms from policy decisions. Choices is written in C++ and runs on a ten-processor Encore Multimax. The system is intended for use in constructing specialized computer applications and research on advanced operating system features including fault tolerance and parallelism.
NASA Technical Reports Server (NTRS)
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
Adaptive DFT-based Interferometer Fringe Tracking
NASA Technical Reports Server (NTRS)
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
2004-01-01
An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
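A toy version of the core operation in such a tracker (not the IOTA implementation): a sliding-window DFT evaluates one frequency bin over the most recent n samples, and drift in that bin's phase tracks the fringe position. The function name and synthetic data are illustrative.

```python
import cmath
import math

def window_phase(samples, k, n):
    """Phase of DFT bin k computed over the last n samples.

    Illustrative sketch of a sliding-window DFT: the window advances as new
    samples arrive, and the phase of the carrier bin is re-estimated each time.
    """
    w = samples[-n:]  # most recent n samples
    bin_k = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(w))
    return cmath.phase(bin_k)
```

For a pure cosine exactly on bin k, the recovered phase is 0; a fringe drifting through the window shows up as a steadily changing phase.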
An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.
ERIC Educational Resources Information Center
Gonzales, Michael G.
1984-01-01
Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to derive the run times of various other algorithms. (JN)
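A sketch in the spirit of the article, substituting comparison counting for wall-clock timing: count bubble sort's comparisons for several input sizes and check the quadratic worst case of n(n-1)/2 comparisons on a reversed list.

```python
def bubble_sort(a):
    """Sort a copy of `a` with bubble sort, counting comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        # after pass i, the largest i+1 elements are in place at the end
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons
```

On a reversed list of n items the count is exactly n(n-1)/2, which is the empirical signature of quadratic growth: doubling n roughly quadruples the run time.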
Real-time two-dimensional temperature imaging using ultrasound.
Liu, Dalong; Ebbini, Emad S
2009-01-01
We present a system for real-time 2D imaging of temperature change in tissue media using pulse-echo ultrasound. The frontend of the system is a SonixRP ultrasound scanner with a research interface that allows control of the beam sequence and access to radio frequency (RF) data in real time. The beamformed RF data is streamed to the backend of the system, where the data is processed using a two-dimensional temperature estimation algorithm running on the graphics processing unit (GPU). The estimated temperature is displayed in real time, providing feedback that can be used for real-time control of the heating source. We have verified our system with an elastography tissue-mimicking phantom and in vitro porcine heart tissue; excellent repeatability and sensitivity were demonstrated.
Hoffman, J R
1997-07-01
The relationship between aerobic fitness and recovery from high-intensity exercise was examined in 197 infantry soldiers. Aerobic fitness was determined by a maximal-effort, 2,000-m run (RUN). High-intensity exercise consisted of three bouts of a continuous 140-m sprint with several changes of direction. A 2-minute passive rest separated each sprint. A fatigue index was developed by dividing the mean time of the three sprints by the fastest time. Times for the RUN were converted into standardized T scores and separated into five groups (group 1 had the slowest run time and group 5 had the fastest run time). Significant differences in the fatigue index were seen between group 1 (4.9 +/- 2.4%) and groups 3 (2.6 +/- 1.7%), 4 (2.3 +/- 1.6%), and 5 (2.3 +/- 1.3%). It appears that recovery from high-intensity exercise is improved at higher levels of aerobic fitness (faster time for the RUN). However, as the level of aerobic fitness improves above the population mean, no further benefit in the recovery rate from high-intensity exercise is apparent.
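The fatigue index described (mean of the three sprint times divided by the fastest) can be computed as below; expressing it as a percentage above the fastest time is an assumption consistent with the ~2-5% group values reported.

```python
def fatigue_index(sprint_times):
    """Fatigue index as a percentage: how much the mean sprint time
    exceeds the fastest sprint time.

    Assumed form (mean/fastest - 1) * 100, consistent with the reported
    values; the abstract states only 'mean divided by fastest'.
    """
    fastest = min(sprint_times)
    mean = sum(sprint_times) / len(sprint_times)
    return (mean / fastest - 1) * 100
```

A runner with sprints of 30.0, 30.5, and 31.0 s has a fatigue index of about 1.67%.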
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. The current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
The Behavior of TCP and Its Extensions in Space
NASA Technical Reports Server (NTRS)
Wang, Ruhai; Horan, Stephen
2001-01-01
The performance of Transmission Control Protocol (TCP) in space has been examined from the observations of simulation and experimental tests for several years at the National Aeronautics and Space Administration (NASA), the Department of Defense (DoD) and universities. At New Mexico State University (NMSU), we have been concentrating on studying the performance of two protocol suites: the file transfer protocol (ftp) running over the Transmission Control Protocol/Internet Protocol (TCP/IP) stack and the file protocol (fp) running over the Space Communications Protocol Standards (SCPS)-Transport Protocol (TP) developed under the Consultative Committee for Space Data Systems (CCSDS) standards process. SCPS-TP is considered to be TCP's extensions for space communications. This dissertation experimentally studies the behavior of TCP and SCPS-TP by running the protocol suites over both the Space-to-Ground Link Simulator (SGLS) test-bed and a realistic satellite link. The study concentrates on comparing protocol behavior by plotting the averaged file transfer times for different experimental configurations and analyzing them using Statistical Analysis System (SAS) based procedures. The effects of different link delays and various Bit-Error-Rates (BERs) on each protocol's performance are also studied, and linear regression models are built for experiments over the SGLS test-bed to reflect the relationships between the file transfer time and various transmission conditions.
A submerged tubular ceramic membrane bioreactor for high strength wastewater treatment.
Sun, D D; Zeng, J L; Tay, J H
2003-01-01
A 4 L submerged tubular ceramic membrane bioreactor (MBR) was applied in laboratory scale to treat 2,400 mg-COD/L high strength wastewater. A prolonged sludge retention time (SRT) of 200 days, in contrast to the conventional SRT of 5 to 15 days, was explored in this study, aiming to substantially reduce the amount of disposed sludge. The MBR system was operated for a period of 142 days in four runs, differentiated by specific oxygen utilization rate (SOUR) and hydraulic retention time (HRT). It was found that the MBR system produced more than 99% suspended solid reduction. Mixed liquor suspended solids (MLSS) was found to be inversely proportional to HRT, and in general higher than the value from a conventional wastewater treatment plant. Chemical oxygen demand (COD) removal efficiency as high as 98% was achieved in Run 1, when SOUR was in the range of 100-200 mg-O/g-MLVSS/hr. Unexpectedly, the COD removal efficiency in Runs 2 to 4 was higher than 92% on average, where higher HRT and an abnormally low SOUR of 20-30 mg-O/g-MLVSS/hr prevailed. It was noted that the ceramic membrane presented significant soluble nutrient rejection when the microbial metabolism of the biological treatment broke down.
Ortiz, X A; Smith, J F; Bradford, B J; Harner, J P; Oddy, A
2010-10-01
Two experiments were conducted on a commercial dairy farm to describe the effects of a reduction in Korral Kool (KK; Korral Kool Inc., Mesa, AZ) system operating time on core body temperature (CBT) of primiparous and multiparous cows. In the first experiment, KK systems were operated for 18, 21, or 24 h/d while CBT of 63 multiparous Holstein dairy cows was monitored. All treatments started at 0600 h, and KK systems were turned off at 0000 h and 0300 h for the 18-h and 21-h treatments, respectively. Animals were housed in 9 pens and assigned randomly to treatment sequences in a 3 × 3 Latin square design. In the second experiment, 21 multiparous and 21 primiparous cows were housed in 6 pens and assigned randomly to treatment sequences (KK operated for 21 or 24 h/d) in a switchback design. All treatments started at 0600 h, and KK systems were turned off at 0300 h for the 21-h treatments. In experiment 1, cows in the 24-h treatment had a lower mean CBT than cows in the 18- and 21-h treatments (38.97, 39.08, and 39.03±0.04°C, respectively). The significant treatment by time interaction showed that the greatest treatment effects occurred at 0600 h; treatment means at this time were 39.43, 39.37, and 38.88±0.18°C for 18-, 21-, and 24-h treatments, respectively. These results demonstrate that a reduction in KK system running time of ≥3 h/d will increase CBT. In experiment 2, a significant parity by treatment interaction was found. Multiparous cows on the 24-h treatment had lower mean CBT than cows on the 21-h treatment (39.23 and 39.45±0.17°C, respectively), but treatment had no effect on mean CBT of primiparous cows (39.50 and 39.63±0.20°C for 21- and 24-h treatments, respectively). A significant treatment by time interaction was observed, with the greatest treatment effects occurring at 0500 h; treatment means at this time were 39.57, 39.23, 39.89, and 39.04±0.24°C for 21-h primiparous, 24-h primiparous, 21-h multiparous, and 24-h multiparous cows, respectively. 
These results demonstrate that multiparous and primiparous cows respond differently when KK system running time decreases from 24 to 21 h. We conclude that in desert climates, the KK system should be operated continuously to decrease heat stress of multiparous dairy cows, but that operating time could be reduced from 24 to 21 h for primiparous cows. Reducing system operation time should be done carefully, however, because CBT was elevated in all treatments. Copyright © 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Greuter, U.; Buehler, C.; Rasmussen, P.; Emmenegger, M.; Maden, D.; Koennecke, M.; Schlumpf, N.
We present the basic concept and the realization of our fully configurable data-acquisition hardware for the neutron scattering instruments at SINQ. This system allows collection of the different data entities and event-related signals generated by the various detector units. It offers a variety of synchronization options, including a time-measuring mode for time-of-flight determinations. Based on configurable logic (FPGA, CPLD), event rates up to the MHz range can be processed and transmitted to a programmable online data-reduction system (Histogram Memory). It is implemented on a commercially available VME Power PC module running a real-time operating system (VxWorks).
Context-aware distributed cloud computing using CloudScheduler
NASA Astrophysics Data System (ADS)
Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.
2017-10-01
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
Noack, Marko; Partzsch, Johannes; Mayr, Christian G.; Hänzsche, Stefan; Scholze, Stefan; Höppner, Sebastian; Ellguth, Georg; Schüffny, Rene
2015-01-01
Synaptic dynamics, such as long- and short-term plasticity, play an important role in the complexity and biological realism achievable when running neural networks on a neuromorphic IC. For example, they endow the IC with an ability to adapt and learn from its environment. In order to achieve the millisecond to second time constants required for these synaptic dynamics, analog subthreshold circuits are usually employed. However, due to process variation and leakage problems, it is almost impossible to port these types of circuits to modern sub-100nm technologies. In contrast, we present a neuromorphic system in a 28 nm CMOS process that employs switched capacitor (SC) circuits to implement 128 short term plasticity presynapses as well as 8192 stop-learning synapses. The neuromorphic system consumes an area of 0.36 mm2 and runs at a power consumption of 1.9 mW. The circuit makes use of a technique for minimizing leakage effects allowing for real-time operation with time constants up to several seconds. Since we rely on SC techniques for all calculations, the system is composed of only generic mixed-signal building blocks. These generic building blocks make the system easy to port between technologies and the large digital circuit part inherent in an SC system benefits fully from technology scaling. PMID:25698914
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
Principal Investigator in a Box Technical Description Document. 2.0
NASA Technical Reports Server (NTRS)
Groleau, Nick; Frainier, Richard
1994-01-01
This document provides a brief overview of the PI-in-a-Box system, which can be used for automatic real-time reaction to incoming data. We will therefore outline the current system's capabilities and limitations, and hint at how best to think about PI-in-a-Box as a tool for real-time analysis and reaction in section two, below. We also believe that the solution to many commercial real-time process problems requires data acquisition and analysis combined with rule-based reasoning and/or an intuitive user interface. We will develop the technology reuse potential in section three. Currently, the system runs only on Apple Computer's Macintosh series.
Experiences running NASTRAN on the Microvax 2 computer
NASA Technical Reports Server (NTRS)
Butler, Thomas G.; Mitchell, Reginald S.
1987-01-01
The MicroVAX operates NASTRAN so well that the only detectable difference in its operation compared to an 11/780 VAX is in the execution time. On the modest installation described here, the engineer has all of the tools he needs to do an excellent job of analysis. System configuration decisions, system sizing, preparation of the system disk, definition of user quotas, installation, monitoring of system errors, and operation policies are discussed.
Dorn, Tim W; Schache, Anthony G; Pandy, Marcus G
2012-06-01
Humans run faster by increasing a combination of stride length and stride frequency. In slow and medium-paced running, stride length is increased by exerting larger support forces during ground contact, whereas in fast running and sprinting, stride frequency is increased by swinging the legs more rapidly through the air. Many studies have investigated the mechanics of human running, yet little is known about how the individual leg muscles accelerate the joints and centre of mass during this task. The aim of this study was to describe and explain the synergistic actions of the individual leg muscles over a wide range of running speeds, from slow running to maximal sprinting. Experimental gait data from nine subjects were combined with a detailed computer model of the musculoskeletal system to determine the forces developed by the leg muscles at different running speeds. For speeds up to 7 m s(-1), the ankle plantarflexors, soleus and gastrocnemius, contributed most significantly to vertical support forces and hence increases in stride length. At speeds greater than 7 m s(-1), these muscles shortened at relatively high velocities and had less time to generate the forces needed for support. Thus, above 7 m s(-1), the strategy used to increase running speed shifted to the goal of increasing stride frequency. The hip muscles, primarily the iliopsoas, gluteus maximus and hamstrings, achieved this goal by accelerating the hip and knee joints more vigorously during swing. These findings provide insight into the strategies used by the leg muscles to maximise running performance and have implications for the design of athletic training programs.
Resource Limitation Issues In Real-Time Intelligent Systems
NASA Astrophysics Data System (ADS)
Green, Peter E.
1986-03-01
This paper examines resource limitation problems that can occur in embedded AI systems which have to run in real-time. It does this by examining two case studies. The first is a system which acoustically tracks low-flying aircraft and has the problem of interpreting a high volume of often ambiguous input data to produce a model of the system's external world. The second is a robotics problem in which the controller for a robot arm has to dynamically plan the order in which to pick up pieces from a conveyer belt and sort them into bins. In this case the system starts with a continuously changing model of its environment and has to select which action to perform next. This latter case emphasizes the issues in designing a system which must operate in an uncertain and rapidly changing environment. The first system uses a distributed HEARSAY methodology running on multiple processors. It is shown, in this case, how the combinatorial growth of possible interpretations of the input data can require large and unpredictable amounts of computer resources for data interpretation. Techniques are presented which achieve real-time operation by limiting the combinatorial growth of alternate hypotheses and processing those hypotheses that are most likely to lead to a meaningful interpretation of the input data. The second system uses a decision tree approach to generate and evaluate possible plans of action. It is shown how the combinatorial growth of possible alternate plans can, as in the previous case, require large and unpredictable amounts of computer time to evaluate and select from amongst the alternatives. The use of approximate decisions to limit the amount of computer time needed is discussed. The concept of using incremental evidence is then introduced, and it is shown how this can be used as the basis of systems that combine heuristic and approximate evidence in making real-time decisions.
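The paper does not give the pruning rule itself, but the technique it names, bounding the combinatorial growth of alternate hypotheses by keeping only the most promising ones each cycle, is essentially a beam search. A minimal sketch under that interpretation, with problem-specific `expand` and `score` callbacks left as assumptions:

```python
import heapq

def beam_step(hypotheses, expand, score, beam_width=8):
    """One cycle of hypothesis generation with bounded growth: expand every
    surviving hypothesis into its successors, then retain only the
    beam_width highest-scoring candidates. Resource use per cycle is
    bounded by beam_width regardless of how many successors exist."""
    candidates = [h2 for h in hypotheses for h2 in expand(h)]
    return heapq.nlargest(beam_width, candidates, key=score)
```

Running `beam_step` repeatedly keeps the working set fixed in size, which is what makes the per-cycle compute cost predictable for a real-time system.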
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
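The inspector/executor scheme described above can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: dependences are given explicitly as a map from each iteration to the earlier iterations it depends on, whereas a real inspector would derive them from the loop's subscript values at run time.

```python
def inspector(deps, n):
    """Assign each of n loop iterations to a wavefront: iteration i can run
    once every iteration in deps[i] has run. Iterations sharing a
    wavefront number are mutually independent."""
    wave = [0] * n
    for i in range(n):
        for j in deps.get(i, []):
            wave[i] = max(wave[i], wave[j] + 1)
    return wave

def executor(wave, body):
    """Run iterations wavefront by wavefront; within one wavefront the
    iterations could be dispatched to parallel workers."""
    for w in sorted(set(wave)):
        for i in (i for i, wi in enumerate(wave) if wi == w):
            body(i)
```

For example, with iteration 3 depending on 1 and 2, which each depend on 0, the inspector produces wavefronts 0, 1, 1, 2, so iterations 1 and 2 may execute concurrently.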
Tardiole Kuehne, Bruno; Estrella, Julio Cezar; Nunes, Luiz Henrique; Martins de Oliveira, Edvard; Hideo Nakamura, Luis; Gomes Ferreira, Carlos Henrique; Carlucci Santana, Regina Helena; Reiff-Marganiec, Stephan; Santana, Marcos José
2015-01-01
This paper proposes a system named AWSCS (Automatic Web Service Composition System) to evaluate different approaches for automatic composition of Web services, based on QoS parameters that are measured at execution time. The AWSCS is a system to implement different approaches for automatic composition of Web services and also to execute the resulting flows from these approaches. Aiming at demonstrating the results of this paper, a scenario was developed, where empirical flows were built to demonstrate the operation of AWSCS, since algorithms for automatic composition are not readily available to test. The results allow us to study the behaviour of running composite Web services, when flows with the same functionality but different problem-solving strategies were compared. Furthermore, we observed that the influence of the load applied on the running system as the type of load submitted to the system is an important factor to define which approach for the Web service composition can achieve the best performance in production. PMID:26068216
Flywheel Energy Storage System Designed for the International Space Station
NASA Technical Reports Server (NTRS)
Delventhal, Rex A.
2002-01-01
Following successful operation of a developmental flywheel energy storage system in fiscal year 2000, researchers at the NASA Glenn Research Center began developing a flight design of a flywheel system for the International Space Station (ISS). In such an application, a two-flywheel system can replace one of the nickel-hydrogen battery strings in the ISS power system. The development unit, sized at approximately one-eighth the size needed for ISS, was run at 60,000 rpm. The design point for the flight unit is a larger composite flywheel, approximately 17 in. long and 13 in. in diameter, running at 53,000 rpm when fully charged. A single flywheel system stores 2.8 kW-hr of useable energy, enough to light a 100-W light bulb for over 24 hr. When housed in an ISS orbital replacement unit, the flywheel would provide energy storage with approximately 3 times the service life of the nickel-hydrogen battery currently in use.
Durham extremely large telescope adaptive optics simulation platform.
Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard
2007-03-01
Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.
Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R
2012-02-01
The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2(MAX): 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2(MAX) prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.
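Critical velocity in this protocol comes from the standard linear total-distance model, in which distance run to exhaustion is a straight-line function of time: d = ARC + CV * t, with the slope giving critical velocity (CV) and the intercept the anaerobic running capacity (ARC). A least-squares sketch of that fit (the paper's exact fitting procedure is not stated, so ordinary least squares is an assumption):

```python
def critical_velocity(times, distances):
    """Ordinary least-squares fit of the linear total-distance model
    d = ARC + CV * t over (time-to-exhaustion, distance) pairs.
    Returns (CV, ARC): slope and intercept."""
    n = len(times)
    mt = sum(times) / n
    md = sum(distances) / n
    cov = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    var = sum((t - mt) ** 2 for t in times)
    cv = cov / var
    arc = md - cv * mt
    return cv, arc
```

Plotting each runner's (CV, ARC) point against isoperformance curves for the two-mile standard, as the study describes, then classifies the runner as passing or failing without a time-trial run.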
A Secure and Robust Approach to Software Tamper Resistance
NASA Astrophysics Data System (ADS)
Ghosh, Sudeep; Hiser, Jason D.; Davidson, Jack W.
Software tamper-resistance mechanisms have increasingly assumed significance as a technique to prevent unintended uses of software. Closely related to anti-tampering techniques are obfuscation techniques, which make code difficult to understand or analyze and therefore challenging to modify meaningfully. This paper describes a secure and robust approach to software tamper resistance and obfuscation using process-level virtualization. The proposed techniques involve novel uses of software checksumming guards and encryption to protect an application. In particular, a virtual machine (VM) is assembled with the application at software build time such that the application cannot run without the VM. The VM provides just-in-time decryption of the program and dynamism for the application's code. The application's code is used to protect the VM to ensure a level of circular protection. Finally, to prevent the attacker from obtaining an analyzable snapshot of the code, the VM periodically discards all decrypted code. We describe a prototype implementation of these techniques and evaluate the run-time performance of applications using our system. We also discuss how our system provides stronger protection against tampering attacks than previously described tamper-resistance approaches.
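The core of a checksumming guard is simple: record a digest of a protected code region at build time and re-verify it at run time. The following is only an illustrative sketch of that idea (the paper's guards operate on native code inside a process-level VM, not on byte strings as here):

```python
import hashlib

def install_guard(code_bytes):
    """Record a reference digest for a protected code region and return a
    checker that could be invoked periodically at run time."""
    reference = hashlib.sha256(code_bytes).digest()

    def check(current_bytes):
        # Any bit flipped by a tampering attempt changes the digest,
        # so a mismatch signals that the region was modified.
        return hashlib.sha256(current_bytes).digest() == reference

    return check
```

In the described system such checks are woven into the VM and the application so that each protects the other, rather than standing alone as here.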
Elixir - how to handle 2 trillion pixels
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Cuillandre, Jean-Charles
2002-12-01
The Elixir system at CFHT provides automatic data quality assurance and calibration for the wide-field mosaic imager camera CFH12K. Elixir consists of a variety of tools, including: a real-time analysis suite which runs at the telescope to provide quick feedback to the observers; a detailed analysis of the calibration data; and an automated pipeline for processing data to be distributed to observers. To date, 2.4 × 10^12 night-time sky pixels from CFH12K have been processed by the Elixir system.
The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality
NASA Technical Reports Server (NTRS)
Conway, Darrel J.; Hughes, Steven P.
2010-01-01
The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).
Development of the CELSS Emulator at NASA JSC
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1989-01-01
The Controlled Ecological Life Support System (CELSS) Emulator is under development at the NASA Johnson Space Center (JSC) with the purpose of investigating computer simulations of integrated CELSS operations involving humans, plants, and process machinery. This paper describes Version 1.0 of the CELSS Emulator that was initiated in 1988 on the JSC Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS Emulator makes it possible to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
Development of the CELSS emulator at NASA. Johnson Space Center
NASA Technical Reports Server (NTRS)
Cullingford, Hatice S.
1990-01-01
The Closed Ecological Life Support System (CELSS) Emulator is under development. It will be used to investigate computer simulations of integrated CELSS operations involving humans, plants, and process machinery. Described here is Version 1.0 of the CELSS Emulator that was initiated in 1988 on the Johnson Space Center (JSC) Multi Purpose Applications Console Test Bed as the simulation framework. The run module of the simulation system now contains a CELSS model called BLSS. The CELSS simulator empowers us to generate model data sets, store libraries of results for further analysis, and also display plots of model variables as a function of time. The progress of the project is presented with sample test runs and simulation display pages.
Malisoux, Laurent; Delattre, Nicolas; Urhausen, Axel; Theisen, Daniel
2017-08-21
Repetitive loading of the musculoskeletal system is suggested to be involved in the underlying mechanism of the majority of running-related injuries (RRIs). Accordingly, heavier runners are assumed to be at a higher risk of RRI. The cushioning system of modern running shoes is expected to protect runners against high impact forces, and therefore, RRI. However, the role of shoe cushioning in injury prevention remains unclear. The main aim of this study is to investigate the influence of shoe cushioning and body mass on RRI risk, while simultaneously exploring the association between running technique and RRI risk. This double-blinded randomised controlled trial will involve about 800 healthy leisure-time runners. They will randomly receive one of two running shoe models that will differ in their cushioning properties (ie, stiffness) by ~35%. The participants will perform a running test on an instrumented treadmill at their preferred running speed at baseline. Then they will be followed up prospectively over a 6-month period, during which they will self-report all their sports activities as well as any injury in an internet-based database TIPPS (Training and Injury Prevention Platform for Sports). Cox regression analyses will be used to compare injury risk between the study groups and to investigate the association among training, biomechanical and anatomical risk factors, and injury risk. The study was approved by the National Ethics Committee for Research (Ref: 201701/02 v1.1). Outcomes will be disseminated through publications in peer-reviewed journals, presentations at international conferences, as well as articles in popular magazines and on specialised websites. NCT03115437, Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, B.; /Fermilab
1999-10-08
A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems: the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required.
This paper will address methods and strategies used to design and implement an effective user interface. Background material pertinent to the HVAC system will cover the separate water and air subsystems and their purposes. In addition, programming and system automation will also be covered.
Authoritative Authoring: Software That Makes Multimedia Happen.
ERIC Educational Resources Information Center
Florio, Chris; Murie, Michael
1996-01-01
Compares seven mid- to high-end multimedia authoring software systems that combine graphics, sound, animation, video, and text for Windows and Macintosh platforms. A run-time project was created with each program using video, animation, graphics, sound, formatted text, hypertext, and buttons. (LRW)
Users Manual for the Geospatial Stream Flow Model (GeoSFM)
Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James
2008-01-01
The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.
Complex Event Recognition Architecture
NASA Technical Reports Server (NTRS)
Fitzgerald, William A.; Firby, R. James
2009-01-01
Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
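The declarative pattern style described in this abstract, point events from input streams combined with conjunction, disjunction, and negation, can be illustrated with a minimal sketch. This is not CERA's actual pattern language or API (which is not reproduced in the abstract); the class names and the anomaly example below are invented for illustration.

```python
# Illustrative sketch of declarative event-pattern matching in the
# spirit of CERA's pattern language. All names here are hypothetical.

class Event:
    """A point event observed on a named input stream."""
    def __init__(self, stream, name, time):
        self.stream, self.name, self.time = stream, name, time

class Point:
    """Matches when a given named event appears on a given stream."""
    def __init__(self, stream, name):
        self.stream, self.name = stream, name
    def matches(self, events):
        return any(e.stream == self.stream and e.name == self.name
                   for e in events)

class And:
    """Conjunction: every sub-pattern must match."""
    def __init__(self, *patterns): self.patterns = patterns
    def matches(self, events):
        return all(p.matches(events) for p in self.patterns)

class Or:
    """Disjunction: at least one sub-pattern must match."""
    def __init__(self, *patterns): self.patterns = patterns
    def matches(self, events):
        return any(p.matches(events) for p in self.patterns)

class Not:
    """Negation: matches only if the sub-pattern does not."""
    def __init__(self, pattern): self.pattern = pattern
    def matches(self, events):
        return not self.pattern.matches(events)

# Hypothetical monitoring rule: flag an anomaly when temperature and
# pressure alarms both occur without an operator acknowledgement.
anomaly = And(Point("temp", "alarm"),
              Point("pressure", "alarm"),
              Not(Point("ops", "ack")))

log = [Event("temp", "alarm", 1.0), Event("pressure", "alarm", 1.5)]
print(anomaly.matches(log))  # True: both alarms seen, no acknowledgement
```

Because patterns are ordinary composable objects, richer patterns can be built recursively from simpler ones, which mirrors the recursive composition the abstract attributes to CERA.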
Comparison of Sprint and Run Times with Performance on the Wingate Anaerobic Test.
ERIC Educational Resources Information Center
Tharp, Gerald D.; And Others
1985-01-01
Male volunteers were studied to examine the relationship between the Wingate Anaerobic Test (WAnT) and sprint-run times and to determine the influence of age and weight. Results indicate the WAnT is a moderate predictor of dash and run times but becomes a stronger predictor when adjusted for body weight. (Author/MT)
12 CFR 1102.306 - Procedures for requesting records.
Code of Federal Regulations, 2011 CFR
2011-01-01
... section; (B) Where the running of such time is suspended for the calculation of a cost estimate for the... section; (C) Where the running of such time is suspended for the payment of fees pursuant to the paragraph... of the invoice. (ix) The time limit for the ASC to respond to a request will not begin to run until...
NASA Astrophysics Data System (ADS)
Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas
2013-04-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system, it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase association capabilities, the port greatly simplifies the potential installation of VS at other networks, in particular those already running SeisComp3. We present the architecture of the new SeisComp3-based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.
Rehman, M S; Mahmud, A; Mehmood, S; Pasha, T N; Khan, M T; Hussain, J
2018-03-01
The objective of this study was to explore the effects of free-range (FR), part-time free-range (PTFR), and cage system (CS) on behavioral repertoire in Lakha (LK), Mushki (MS), Peshawari (PW), and Sindhi (SN) varieties of Aseel chicken during the growing phase (9 to 18 wk of age). In total, 144 Aseel pullets were allotted to 12 treatment groups in a 3 × 4 (rearing system × Aseel variety) factorial arrangement, according to a randomized complete block design (RCBD). Each treatment group was replicated 3 times with 4 birds in each replicate (12 birds per treatment group). The pullets were randomly marked weekly for identification, and their behavior was observed through the focal animal sampling method. Time spent on different behavioral activities was recorded and converted to a percentage. The data were analyzed using 2-way ANOVA under a factorial arrangement using SAS 9.1, and the behavioral parameters were evaluated. The results indicated greater (P < 0.05) sitting, standing, drinking, preening, and aggressiveness in CS; walking, running, and jumping in PTFR; and foraging and dustbathing in both FR and PTFR, whereas feather pecking was found to be reduced in FR compared with PTFR and CS. Among varieties, PW showed the least feeding/foraging and feather pecking behavior, and greater standing, running, and jumping behavior (P < 0.05). However, SN spent less time in walking and preening, and more time in sitting, drinking, and aggressiveness. Dustbathing was found to be similar in all Aseel varieties (P = 0.135). In conclusion, the PTFR system could be suggested as a substitute for conventional housing systems because it better accommodates normal behavior in Aseel pullets.
Sun Series program for the REEDA System. [predicting orbital lifetime using sunspot values
NASA Technical Reports Server (NTRS)
Shankle, R. W.
1980-01-01
Modifications made to data bases and to four programs in a series of computer programs (Sun Series) which run on the REEDA HP minicomputer system to aid NASA's solar activity predictions used in orbital lifetime predictions are described. These programs utilize various mathematical smoothing techniques and perform statistical and graphical analysis of various solar activity data bases residing on the REEDA System.
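The abstract does not specify which smoothing techniques the Sun Series programs use, but one technique classically applied to monthly sunspot data is the centered 13-month running mean with half-weighted endpoints (the standard "smoothed sunspot number"). The sketch below illustrates that technique only; the function name and sample values are invented for illustration and are not taken from the Sun Series code.

```python
# Hypothetical illustration of one classical smoothing technique for
# monthly sunspot data: a centered 13-month mean in which the two end
# months carry half weight, so the effective window length is 12 months.

def smooth_13_month(monthly, i):
    """Smoothed value at month index i (requires 6 months on each side)."""
    window = monthly[i - 6 : i + 7]  # 13 consecutive monthly values
    total = 0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]
    return total / 12.0

# A flat series with one spike: smoothing spreads the spike's influence.
monthly = [10.0] * 6 + [22.0] + [10.0] * 6
print(smooth_13_month(monthly, 6))  # 11.0: the spike averaged down
```

A smoothed series like this is what cycle-prediction fits are typically applied to, since raw monthly sunspot counts are too noisy for trend extrapolation.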
Greene, J; Lutz, S; Jaklevic, M C; Japsen, B; Kertesz, L; Shriver, K; Pallarito, K; Scott, L; Morrissey, J; Moore, J D; Burda, D; Fitzgerald, J
1996-01-01
It's put-up-or-shut-up time for healthcare providers in 1996. Two years ago, everyone talked about fixing the healthcare system. Not much happened. Last year, providers and politicians concentrated on squeezing medical costs. According to some of Modern Healthcare's key beat reports, this year it's back to the basics of running a business.
Core Flight System (cFS) a Low Cost Solution for SmallSats
NASA Technical Reports Server (NTRS)
McComas, David; Strege, Susanne; Wilmot, Jonathan
2015-01-01
The cFS is a FSW product line that uses a layered architecture and compile-time configuration parameters which make it portable and scalable for a wide range of platforms. The software layers that define the application run-time environment are now under a NASA-wide configuration control board with the goal of sustaining an open-source application ecosystem.